Navigating the Architectural Landscape: Where AI Fits (and Doesn’t)


In the field, almost every product story I’ve watched unfold starts with the same opening scene: one big “get-it-live-by-Friday” monolith, followed by a mad scramble to peel away risk as usage soars. The playbook that usually follows is pretty consistent—modular layering, then ports-and-adapters, moving into onion/clean rings, sometimes vertical slices or self-contained systems, a micro-kernel for plug-ins, and only then (if you’re flirting with Fortune-500 traffic) do you even consider the full Netflix-scale microservices circus. AI tools, in my experience, accelerate the boilerplate at each stage but never absolve us of architectural debt—that signature on the release ticket is still ours.


Act I: The Scrappy Monolith – Your MVP’s Best Friend (and Future Headache)

The opening scene again: one big “get-it-live-by-Friday” monolith, followed by a mad scramble to peel away risk as usage soars. It’s a common scene because it’s rooted in a common desire: rapid iteration. One repo, one deploy, nothing getting in the way of “just ship it.”

Why we still start here: The tight coupling makes early demos rocket-fast. Because everything is interconnected, a change in one spot appears to ripple through instantly. But, as HyperTest’s checklist points out, that same coupling means a single tweak can sideswipe five modules. Their list of eight classic monolith headaches—scalability caps, slow deploys, tangly bug hunts—matches my lived pain, point for point.
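To make that concrete, here’s a deliberately ugly sketch (every name invented for illustration) of the kind of hard-wired code that makes week-one demos fast and year-two changes terrifying:

```csharp
using System;

// Hypothetical monolith code; every type here is invented for illustration.
public record Order(Guid Id, string CustomerEmail);

public class SqlOrderRepository { public void Save(Order order) { /* direct SQL */ } }

public class EmailClient
{
    public EmailClient(string smtpHost) { /* direct SMTP setup */ }
    public void Send(string to, string subject) { /* send mail */ }
}

public class AuditLogger { public void Log(string message) { /* direct file I/O */ } }

public class OrderService
{
    // Change EmailClient's constructor, or move orders off SQL Server,
    // and this class (and every class written like it) breaks with it.
    public void PlaceOrder(Order order)
    {
        new SqlOrderRepository().Save(order);
        new EmailClient("smtp.example.com").Send(order.CustomerEmail, "Order confirmed");
        new AuditLogger().Log($"Order {order.Id} placed");
    }
}
```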

Copilot to the rescue—sort of: GitHub’s study shows generative AI pairs can slice routine coding time in half, freeing our brains for harder thinking. This is where AI excels in the initial scaffolding phase of a monolith. It can generate that initial code for you and steer it toward more extensible, maintainable architectures like onion or clean, or even just basic N-tier. You get the benefit of more sustainable architecture patterns without actually spending the time on them. But if you let the bot just sprinkle clever “quick fixes,” congratulations—you’ve essentially automated the very coupling you’ll soon have to rip out.

The clock starts ticking: Martin Fowler’s “Monolith First” essay absolutely nails it: the moment users love your app is the moment your monolith becomes a liability. If it doesn’t take off, you’ve wasted minimal time, which sounds good. But you really want to be building for value and success. If you build it cheaply and quickly, you’re going to pay back that difference once the demand is there for it to scale. A monolith trying to host a multi-tenant environment with tons of customers coming and going simply won’t work well. You’ll lean heavily on hardware, need incredibly strict processes, and require a lot of people to keep that thing ticking. I’ve seen successful monolith deployments, but they often require dedicated teams of 30 or 40 people for a single solution, with extensive manual QA and regression because the technology wasn’t built for automated testing. Plan your escape route early.

If you go down this path, remember: if it succeeds, the clock is ticking. You’ll need to pivot fast to an architecture that reduces risk while still delivering value. Many companies try to stuff in features after the fact, saying, “Here’s the value add for the additional spend!” In reality, they didn’t invest enough upfront to build it sustainably. There’s a political aspect here too; I’ve seen organizations where there’s an expectation that development is always an immediate, perpetual value add. Marketing says you pay for it once, and it works forever, but the reality is, if you build too fast and too lean, you eventually hit a wall. It wasn’t built for a five-year horizon; it was built to solve a problem today.


Act II: Modular Monolith & Old-School Layers – Carving the Slab

Once your product has value and the clock is ticking on that initial monolith, you need to think about transitioning. Even before a big rewrite, I always recommend quarantining data access, business logic, and UI in separate assemblies. It might feel like window-dressing, yet it lets me swap infrastructure later with surgical strikes rather than heart surgery. The counterpoint essay on Fowler’s site, Stefan Tilkov’s “Don’t Start with a Monolith,” makes the same case from the other side of the coin: you have to build with the end in mind. You can’t just slap a new architecture on later, like reframing a house after the siding’s on—it simply doesn’t work.

Think about your product not just as code, but as the process, the automation, the value it brings. If expanding that value means completely replacing the codebase with a new one that better serves that purpose, then it’s a warranted cost. There’s a tipping point where a monolith will just cap out, frustrating loyal customers because the system that once worked is now slow. This is your “stage two of the rocket.” The initial monolith is the big booster getting you off the ground, but once you’re in orbit, you need more intricate moves, more fine-tuning.

Provider swap in practice: In baby steps, you can start to modularize your monolith. Even a traditional N-tier architecture, while not a true modular monolith, is a way to break up that giant “ball of mud.” Split it along technical function: everything that accesses the database, all the business logic, the user interface. If you then realize your backing technology (like the database) isn’t scaling, and you’ve modularized that layer, you only have to refactor that specific layer.

Take EF Core’s SQLite provider, for instance. It can’t handle ALTER TABLE gymnastics or several data types. If you built your dev environments on SQLite “to save cash” but production on SQL Server, the abstracted layer keeps the difference from detonating. This highlights the value of thinking about your technology stack early. If you know you’ll want to pivot from a particular technology if the product takes off, decouple yourself as much as possible. Put an abstraction there so you can easily swap it out later.
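As a sketch of what that abstraction buys you, assuming a typical ASP.NET Core minimal-API project with an illustrative AppDbContext and a “Database” connection string, the provider decision collapses to one registration point:

```csharp
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// One registration point decides the provider; nothing downstream cares which it is.
builder.Services.AddDbContext<AppDbContext>(options =>
{
    var conn = builder.Configuration.GetConnectionString("Database");
    if (builder.Environment.IsDevelopment())
        options.UseSqlite(conn);      // cheap local dev; limited ALTER TABLE support
    else
        options.UseSqlServer(conn);   // the production-grade provider
});

var app = builder.Build();
app.Run();

// Placeholder context so the sketch stands alone; yours would map real entities.
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}
```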


Act III: Ports & Adapters (Hexagonal) – The Power of Abstraction

The mature form of this idea is ports and adapters, or hexagonal architecture. Alistair Cockburn’s model gives every outside dependency—database, e-mail, payment gateway—its own adapter bolted to a formal port. You take a specific integration, like your database, front-end, or email, and define a contract: a set of functions with clear expectations about what goes in and what comes out. Your core systems don’t directly reference SendGrid, for example; they reference your abstraction (a function or service), which is your “port.” If you want to swap SendGrid for Outlook, you just build a new adapter that adheres to that interface, plug it in, and your core logic stays pure.
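Here’s a minimal sketch of that port and one adapter. The SendGrid calls are the real library API; the port name (IEmailSender) and the from-address are invented for illustration:

```csharp
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

// The port: a contract the core owns. Core code only ever sees this.
public interface IEmailSender
{
    Task SendAsync(string to, string subject, string htmlBody);
}

// One adapter per provider; the core never references SendGrid directly.
public sealed class SendGridEmailSender : IEmailSender
{
    private readonly SendGridClient _client;
    public SendGridEmailSender(SendGridClient client) => _client = client;

    public Task SendAsync(string to, string subject, string htmlBody)
    {
        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("noreply@example.com"), new EmailAddress(to),
            subject, plainTextContent: null, htmlContent: htmlBody);
        return _client.SendEmailAsync(message);
    }
}
```

Swapping SendGrid for Outlook means writing one new class that implements IEmailSender; the core never changes.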

AI fit: Perfect spot for code-gen. You can tell it: “Here’s the interface, give me a stub adapter for each provider.” Ten minutes of prompting can replace a day of repetitive plumbing. AI can analyze and build scaffolding, highlight unsupported areas, and even scaffold code for situations like returning dummy results in lower environments when a specific function isn’t available. With careful guidance (standards for interfaces, naming conventions, and so on), it can bridge those boilerplate gaps across your solution, freeing you to focus on higher-value business problems. No one wants to pay for a mail-client integration for the 150th time; let AI handle those well-trodden paths. As long as the email gets there and looks good, how the sausage is made doesn’t really matter, provided the result is maintainable and auditable.
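This is exactly the kind of stub AI cranks out well: a hypothetical take on the “dummy results in lower environments” idea, satisfying the same illustrative port as above:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

// Stub adapter for dev/test environments: satisfies the port,
// but records the message instead of calling a real provider.
public sealed class FakeEmailSender : IEmailSender
{
    private readonly ILogger<FakeEmailSender> _logger;
    public FakeEmailSender(ILogger<FakeEmailSender> logger) => _logger = logger;

    public Task SendAsync(string to, string subject, string htmlBody)
    {
        _logger.LogInformation("Pretending to email {To}: {Subject}", to, subject);
        return Task.CompletedTask;
    }
}
```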


Act IV: Onion & Clean—Business Rules in the Safe

Moving further inward, we arrive at the core of your solution, which speaks to domain-driven design, clean architecture, and onion architecture. Jeffrey Palermo’s Onion Architecture pushes all frameworks outward so the business core never “knows” about databases or HTTP. Uncle Bob’s Clean Architecture adds a “use-case” ring and screams, “policy in, details out!” Once you have a working business model, you identify the core parts: the technology-agnostic pieces, the domain where data comes in and goes out. You model this independently of the technology. This core should be so clear that any business analyst, project manager, or account manager can articulate the business process, and that process should mirror your code structure.
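A minimal sketch of those rings, with every name invented for illustration. Note that the domain references nothing but the language itself, and the use case depends only on an abstraction:

```csharp
using System;
using System.Threading.Tasks;

// Domain ring: pure business rules, no framework references at all.
public sealed class Invoice
{
    public Guid Id { get; }
    public decimal Total { get; }
    public bool Paid { get; private set; }

    public Invoice(Guid id, decimal total)
    {
        Id = id;
        Total = total;
    }

    public void ApplyPayment(decimal amount)
    {
        if (Paid) throw new InvalidOperationException("Invoice already paid.");
        if (amount < Total) throw new ArgumentException("Underpayment not allowed.");
        Paid = true;
    }
}

// Use-case ring: depends inward on the domain, outward only on a port.
public interface IInvoiceRepository
{
    Task<Invoice> GetAsync(Guid id);
    Task SaveAsync(Invoice invoice);
}

public sealed class PayInvoiceHandler
{
    private readonly IInvoiceRepository _invoices;
    public PayInvoiceHandler(IInvoiceRepository invoices) => _invoices = invoices;

    public async Task HandleAsync(Guid invoiceId, decimal amount)
    {
        var invoice = await _invoices.GetAsync(invoiceId);
        invoice.ApplyPayment(amount);        // the business rule lives in the domain
        await _invoices.SaveAsync(invoice);  // persistence stays behind the port
    }
}
```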

The benefit of this approach is longevity. Code volume goes up, yes, but the part that changes least—the domain—becomes bullet-proof for a decade. You might write more verbose code to keep it independent of the infrastructure, but that core code will only ever need to change if the language itself changes. You could have written that domain model 15 years ago, and as long as you avoided deprecated features, it would still be valid today. You could have changed database providers, UIs, Salesforce integrations, and everything in between; as long as the interaction outside the domain doesn’t need to know the specifics of the integration, just that a behavior is being performed and mapped to a provider, that core can live forever. This is what you want for a sustainable core system that you know won’t change for the next five or ten years. If it changes, you’re changing a business process, a behavior. This is precisely why a software solution architect is crucial: to look at what you have, where you want to go, and chart the most risk-averse path to get there seamlessly.


Act V: Vertical Slices & Self-Contained Systems – Risk-Averse Development

Another extremely risk-averse way to develop software is vertical slices, or what some call self-contained systems (though that label is a bit vague). Jimmy Bogard calls it a vertical slice: take a feature end-to-end—UI, logic, data—and bundle it in one folder. You couple along “the axis of change,” not the UI/domain/infra axis. The idea is to treat each feature as an isolated unit within a broader solution. It can look like a monolith, but it’s really a constellation of tiny monoliths: each slice carries its own UI, business logic, and data access, a self-contained unit with defined boundaries. Essentially, it’s a very lean, feature-complete MVP that you stack next to other slices. Slices don’t directly interact, so if one feature breaks, it doesn’t take any others down.
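Here’s a sketch of one slice, with every type invented for illustration. The point to notice: nothing in it references any other feature:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

namespace Features.CancelOrder;

// Slice-local types; in a real solution the entity may live in a small shared model.
public enum OrderStatus { Pending, Shipped, Cancelled }

public class Order
{
    public Guid Id { get; set; }
    public OrderStatus Status { get; set; }
    public string? CancellationReason { get; set; }
}

public class OrdersDbContext : DbContext
{
    public OrdersDbContext(DbContextOptions<OrdersDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public sealed record CancelOrderRequest(Guid OrderId, string Reason);

public sealed class CancelOrderHandler
{
    private readonly OrdersDbContext _db;   // the slice's own data access
    public CancelOrderHandler(OrdersDbContext db) => _db = db;

    public async Task<bool> HandleAsync(CancelOrderRequest request)
    {
        var order = await _db.Orders.FindAsync(request.OrderId);
        if (order is null || order.Status == OrderStatus.Shipped)
            return false;                    // slice-local rule, slice-local failure

        order.Status = OrderStatus.Cancelled;
        order.CancellationReason = request.Reason;
        await _db.SaveChangesAsync();
        return true;                         // no other slice touched, or even referenced
    }
}
```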

String a few slices together, host each as its own tiny web app, and you’ve reached the Self-Contained System (SCS) pattern—independent deployments without the overhead of hundreds of micro-services. You get modularity and independent deployment without the full complexity of distributed microservices; the Self-contained Systems site (scs-architecture.org) explores the pattern in depth.


Act VI: The Micro-Kernel Detour – The Plug-in Platform

From here, you can branch into the concept of a microkernel. When the product is basically a platform (think IDE plug-ins, CMS modules, ETL steps), I keep a wafer-thin kernel of shared services—logging, authentication, data access—and load everything else via plug-ins. Martin Fowler slots this under “plugin architecture,” praising its agility when feature volatility is high. It’s essentially the minimal amount of “plumbing” you need to support use-case-specific logic. It might be your core infrastructure: everything talks to Entity Framework, everything uses Azure Analytics, all logging goes to a central service. You put all of that into your shared kernel, and your features just interact with a couple of interfaces that give them all that capability.
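A sketch of the shape, assuming an invented IPlugin contract and a folder-of-DLLs convention. The reflection calls are standard .NET; everything else is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// The kernel publishes one small contract; plug-ins implement it.
public interface IPlugin
{
    string Name { get; }
    void Initialize(IServiceProvider kernelServices);  // logging, auth, data access, etc.
}

public static class PluginLoader
{
    // Scan a directory, load each assembly, instantiate anything that is a plug-in.
    public static IEnumerable<IPlugin> LoadFrom(string pluginDirectory)
    {
        foreach (var dll in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(dll);
            var pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (var type in pluginTypes)
                yield return (IPlugin)Activator.CreateInstance(type)!;  // assumes a parameterless ctor
        }
    }
}
```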

AI fit: AI can spit out boilerplate for each plug-in, but someone still needs to vet the security hooks. As Wired’s cautionary tales on AI-written exploits remind us, even with AI generating the code, the responsibility for its security and correctness falls squarely on your shoulders. A shared kernel is, by nature, abstract: your business user won’t care what’s in it, but they will expect it to surface standardized capabilities that carry forward across everything they build. For example, if your shared kernel handles data access over Entity Framework and you implement sorting, paging, and filtering for tables once, then every table you build afterward will be expected to have that same level of functionality, because you built it once and it’s reused.
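For that paging example, a minimal shared-kernel sketch (the PagedResult shape and all names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

// Built once in the shared kernel; every feature's grid reuses it.
public sealed record PagedResult<T>(IReadOnlyList<T> Items, int Total, int Page, int PageSize);

public static class QueryablePaging
{
    public static PagedResult<T> ToPage<T>(this IQueryable<T> query, int page, int pageSize)
    {
        var total = query.Count();
        var items = query
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .ToList();
        return new PagedResult<T>(items, total, page, pageSize);
    }
}
```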


Act VII: The Micro-Services Circus – The Scale Myth and Reality

Now, let’s talk about microservices. Netflix’s Simian Army (Chaos Monkey, et al.) made micro-services famous by proving you could survive random server kills in production. Amazon’s pipeline automates 150 million deployments a year, but that luxury sits atop industrial-grade observability, policy engines, and chaos tools. The reality is, most businesses aren’t large enough to demand that level of agility. What’s often missed is the sheer amount of code, orchestration, manual process (even with automation), documentation, policy, and “red tape” required to orchestrate hundreds of small services, track them, and audit them in real time. For PCI-DSS or NIST shops, every “tiny” service needs its own audit trail, incident playbook, and rate-limit policy—costs many mid-size companies never budget for. The pragmatic write-ups on microservices catalog these hidden costs and their true overhead at length.

Microservices come with a huge overhead. There’s a sweet spot, for sure, where they make sense, but until you hit that threshold, don’t go crazy. There are middle grounds, more value-oriented ways to organize information for small and medium-sized businesses that still offer benefits for line-of-business applications. Of course, for high-throughput situations where certain things just need to process a lot of information without anything in the way, standing up microservices for ingestion or similar tasks absolutely makes sense.

Rule of thumb: If you can’t dedicate full-time owners to ten or more services, stay at the SCS or vertical slice stage until the revenue graph proves you have to break things up.


AI: Friend, Foe, or Both?

How can newer tools like AI extend these concepts? AI can generate code that helps, but it needs your guidance and oversight.

| Stage | AI Excels at… | Watch-outs |
| --- | --- | --- |
| Monolith → Modular | Generating test scaffolds & DI plumbing | Spaghetti suggestions hide future debt |
| Ports & Adapters | Cranking adapter shells & DTOs | Mis-mapping 3rd-party edge cases |
| Onion / Clean | Bulk interface wiring | Over-abstraction of simple flows |
| Vertical Slice / SCS | Boiler-plating small, focused features | Context window means multiple prompts |
| Micro-Services | IaC templates, SLO dashboards | YAML creds, network holes, compliance gaps |

The bot is a wrench, not an autopilot; once code lands in main, the pager rings for you, not the language model. You own that code after AI writes it. You have to understand 100% of what it wrote as if you wrote it yourself. It’s just a tool. You can’t point to AI and say, “AI made me do it.”

When you offload these piecemeal parts—the things you, as a senior engineer, already know how to write—to AI, you’re essentially vetting it like you would a pull request from a junior engineer. Where you still need to use your brain is in putting the parts together and designing the overall solution. All you’ve really gained is agility: something that might have taken two weeks to pivot can now take a day. You might generate a class and quickly realize the approach won’t work, but AI did it faster than it would have taken you to reason through the same amount of code. So there’s a definite agility gain.

But it’s crucial to understand: AI isn’t going to write the whole solution for you in a way that you can then turn around and be solely responsible for. I think this is what many startups are missing when they claim you can generate and deploy an entire solution. That’s great, but if you don’t understand what it wrote and no one audits it, and it starts leaking passwords or credit cards into the web console, guess what? You can’t blame the tool. That’s on you. So, don’t put AI as the only thing between you and delivery, because we’re nowhere near that point yet.


Curtain Call

Think of architecture like a multi-stage rocket:

  • Monolith—raw thrust to leave the pad.
  • Modular layers—steering fins.
  • Ports & onions—finesse in orbit.
  • Slices/SCS—mini-thrusters for maneuvering.
  • Micro-kernel—docking ports for new modules.
  • Micro-services—break into satellites when gravity (traffic, compliance, global teams) demands.