<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Agentic AI Insights]]></title><description><![CDATA[Expert insights, practical tips &amp; cutting-edge research on cloud-based Agentic AI dev. Stay ahead with latest news, trends &amp; innovations in cloud &amp; ]]></description><link>https://tech.kingdavidconsulting.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 16:58:03 GMT</lastBuildDate><atom:link href="https://tech.kingdavidconsulting.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Design Patterns for Agentic AI Systems]]></title><description><![CDATA[Agentic AI refers to autonomous systems that use large language models (LLMs) to perceive, reason, and act in pursuit of goals – often by dynamically calling tools or other software. Teams building these AI agents have found that the most successful ...]]></description><link>https://tech.kingdavidconsulting.com/design-patterns-for-agentic-ai-systems</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/design-patterns-for-agentic-ai-systems</guid><category><![CDATA[ai agents]]></category><category><![CDATA[design patterns]]></category><category><![CDATA[design and architecture]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Mon, 16 Feb 2026 16:18:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/2JIvboGLeho/upload/0d612df7a53b8a98677b374a3158ea3a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Agentic AI</strong> refers to autonomous systems that use large language models (LLMs) to <strong>perceive, reason, and act</strong> in pursuit of goals – often by dynamically calling tools or other software. 
Teams building these AI agents have found that the most successful implementations rely on <strong>simple, composable patterns</strong> of reasoning and execution, rather than sprawling ad-hoc logic. These recurring <strong>design patterns</strong> provide <strong>reusable frameworks for structuring an AI agent’s cognition and behavior</strong>, making it easier to build systems that are <strong>reliable, transparent, and effective</strong>. Design patterns help manage the complexity of long-running AI tasks by <strong>breaking down problems, guiding tool use, and enabling error recovery</strong>, as observed in real-world deployments.</p>
<p>In practice, an agentic AI’s <strong>“mind” is structured by these patterns</strong>. For example, a well-designed agent might alternate between <strong>thinking and acting in a loop</strong> to incrementally solve problems (<em>a pattern known as ReAct</em>), or it might <strong>plan out a sequence of steps in advance</strong> before execution (<em>plan-and-execute pattern</em>). Many advanced agents even combine multiple patterns – <em>for instance, an AI coding assistant may first</em> <strong><em>plan</em></strong> <em>a solution, then</em> <strong><em>reflect</em></strong> <em>on its code and debug it, while a search engine agent might use a</em> <strong><em>ReAct</em></strong> <em>loop within a larger</em> <strong><em>multi-agent</em></strong> <em>workflow</em>. By understanding these patterns and when to use them, developers can create AI agents that are <em>more robust and easier to maintain</em>.</p>
<p>Agentic AI design patterns can be grouped into two broad categories:</p>
<ul>
<li><p><strong>Conceptual (Behavioral) Patterns:</strong> High-level <strong>reasoning strategies</strong> that dictate how the agent thinks, decides, and learns. These patterns define the agent’s cognitive workflow – how it plans tasks, uses tools, or self-corrects errors. They are often inspired by research but have been adopted in industry to make agents more capable and trustworthy.</p>
</li>
<li><p><strong>Architectural (Code-Level) Patterns:</strong> <strong>Structural and implementation patterns</strong> that organize the agent system. These include how to orchestrate one or many agents, how to manage the agent’s memory/state, and how to integrate tools and external resources. They address the engineering side of building maintainable, scalable agent systems.</p>
</li>
</ul>
<p>Below, we discuss the key design patterns in each category, with descriptions and real-world examples. A summary table for each category is provided to encapsulate the patterns, their purpose, and example implementations.</p>
<hr />
<h2 id="heading-conceptual-design-patterns-reasoning-amp-behavior">Conceptual Design Patterns (Reasoning &amp; Behavior)</h2>
<p><strong>Conceptual patterns</strong> describe <strong>how an AI agent reasons through problems and decides on actions</strong>. Rather than leaving the LLM to figure everything out unprompted, these patterns impose a structured approach to reasoning – which in turn leads to more reliable and interpretable behavior. Modern LLM-based agents often use one or several of these patterns:</p>
<h3 id="heading-react-reason-act-pattern"><strong>ReAct (Reason + Act) Pattern</strong></h3>
<p><strong>Description –</strong> <em>ReAct</em> is a foundational reasoning pattern where an agent <strong>interleaves thought and action in a loop</strong>. At each step, the agent <strong>thinks</strong> (generates a reasoning trace to decide what to do), then <strong>acts</strong> (executes an action such as calling a tool or API), then <strong>observes</strong> the result, and repeats this cycle. This iterative <strong>Thought → Action → Observation</strong> loop continues until the task is solved or an answer is produced. By explicitly reasoning at each step and using the environment’s feedback, the ReAct pattern helps <strong>ground the agent’s decisions in actual observations, reducing hallucination and error rates</strong> compared to one-shot answers. ReAct was introduced by researchers in 2022, but quickly proved its practical value and is now a <strong>go-to pattern for building tool-using AI agents</strong>.</p>
<p><strong>Industry Use –</strong> ReAct’s step-by-step approach is well-suited for <strong>open-ended tasks that require multiple reasoning steps or external information</strong>. It has become the default in many agent frameworks and products. For example, <strong>LangChain’s standard agent</strong> is built on the ReAct loop – the LLM decides at each step which tool to use (search engines, calculators, databases, etc.), executes it, and uses the tool’s output to guide the next thought. Early web-connected QA systems like <strong>WebGPT</strong> (OpenAI) followed a ReAct-like process of thinking and searching in turns. Likewise, the popular open-source <strong>AutoGPT</strong> project uses an inner loop of reasoning and tool calls to iteratively move towards a goal (e.g., continually analyzing its progress and deciding the next action such as web browsing or code execution). ReAct remains popular because <strong>it’s simple yet powerful</strong> – as a Wollen Labs analysis notes, it’s often an ideal default when you need an LLM to use tools or handle multi-step queries without having to pre-plan the entire solution.</p>
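<p>The Thought → Action → Observation cycle can be sketched in a few lines of Python. This is a toy illustration, not a framework API: a scripted list of steps stands in for the LLM, and a single calculator function stands in for the toolset – all names here are invented for the example:</p>

```python
# Minimal ReAct loop: Thought -> Action -> Observation until a final answer.
# A scripted list of steps stands in for the LLM; in a real agent, each
# thought/action pair would come from a model prompt.

def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression (unsafe outside a demo)."""
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def react_loop(scripted_steps, max_steps=5):
    observations = []
    for step in scripted_steps[:max_steps]:
        action = step["action"]
        if action["name"] == "finish":              # terminal action: answer
            return action["input"]
        tool = TOOLS[action["name"]]                # act: run the chosen tool
        observations.append(tool(action["input"]))  # observe the result
    return None  # iteration cap hit without an answer

steps = [
    {"thought": "I need to compute 12 * 7.",
     "action": {"name": "calculator", "input": "12 * 7"}},
    {"thought": "The observation was 84; I can answer now.",
     "action": {"name": "finish", "input": "84"}},
]
print(react_loop(steps))  # prints: 84
```

<p>Note the explicit <em>max_steps</em> cap – bounding the loop is a standard safeguard so a confused agent cannot iterate forever.</p>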
<h3 id="heading-self-reflection-critique-amp-refinement-pattern"><strong>Self-Reflection (Critique &amp; Refinement) Pattern</strong></h3>
<p><strong>Description –</strong> In the <em>self-reflection</em> pattern, an agent is designed to <strong>critically evaluate its own outputs and refine them</strong>. Instead of delivering its first answer without question, the agent first generates a solution, then <strong>shifts into a “critic” mode to inspect that result</strong>, looking for errors, validity issues, or ways to improve the answer. If it identifies a problem, the agent goes back and <strong>adjusts its reasoning or tries an alternate approach</strong>, effectively performing a self-guided “second draft.” This may repeat for several iterations until the agent is satisfied or an iteration limit is reached. Reflection mitigates the tendency of LLMs to <strong>commit to an answer too quickly</strong> – by introducing a pause for self-critique, the agent can catch mistakes (factual inaccuracies, unsatisfied constraints, buggy code, etc.) before presenting a final output. This pattern is especially useful in scenarios where <strong>accuracy and reliability are more important than speed</strong>, allowing the agent to correct itself much like a human reviewing their work.</p>
<p><strong>Industry Use –</strong> Many practical AI agents employ reflection to boost quality. For instance, <strong>AI coding assistants</strong> use self-refinement to reduce errors: an agent can draft code, run a test or review on it, then notice a bug or a failed test and fix its code accordingly. The AI might even generate unit tests for its own code to validate correctness. <em>Anthropic’s Claude</em>, in particular, has been noted to use a form of this pattern – the Claude Code assistant can internally “red team” (self-check) its code for vulnerabilities like SQL injection and correct them, essentially acting as its own first reviewer. In the realm of text generation, <strong>content creation bots</strong> do something similar by producing an initial draft, then evaluating it against style guidelines or factual references and revising problematic sections. The Reflection pattern (also dubbed <strong>“Reflexion”</strong> in some literature when it involves learning from past mistakes across multiple attempts) was highlighted by Microsoft researchers as a key to <em>self-correcting agents</em>, and is increasingly common in industry for any task where the cost of a mistake is high.</p>
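<p>Structurally, the pattern reduces to a generate → critique → revise loop. In the sketch below the three roles are stub functions; in a real agent each would be an LLM prompt (“draft this”, “find problems”, “fix them”), and the string-matching critic is purely illustrative:</p>

```python
# Generate -> critique -> revise loop with stubbed roles; each stub would be
# an LLM prompt in a real agent.

def generate(task: str) -> str:
    return "draft answer with a known flaw"

def critique(answer: str):
    # Return None when satisfied, otherwise a correction hint.
    return "remove the flaw" if "flaw" in answer else None

def revise(answer: str, feedback: str) -> str:
    return answer.replace("with a known flaw", "(revised)")

def reflect(task: str, max_iters: int = 3) -> str:
    answer = generate(task)
    for _ in range(max_iters):       # bounded, per the iteration limit above
        feedback = critique(answer)
        if feedback is None:
            break                    # critic is satisfied
        answer = revise(answer, feedback)
    return answer

print(reflect("explain X"))  # prints: draft answer (revised)
```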
<h3 id="heading-planning-plan-and-execute-pattern"><strong>Planning (Plan-and-Execute) Pattern</strong></h3>
<p><strong>Description –</strong> The <em>Planning</em> pattern, often implemented as <strong>Plan-and-Execute</strong>, has an agent <strong>formulate a structured plan of action before diving into execution</strong>. In this approach, the agent uses its reasoning abilities in a dedicated <strong>planning phase</strong> to break a complex goal into sub-tasks or steps, creating a high-level game plan. Only once the plan (for example, a list of steps) is ready does the agent enter an <strong>execution phase</strong>, carrying out each step in sequence (and possibly re-planning if something unexpected occurs). This two-phase approach forces the agent to <strong>think ahead</strong> about the overall solution path, which can prevent myopic actions and provide a clear direction for complex tasks. Planning is particularly useful when tackling <strong>long-horizon tasks with multiple dependencies</strong>, because it helps the agent maintain focus on the end goal and systematically work through sub-goals. Compared to the reactive ReAct loop, planning incurs an upfront cost (one or more planning prompts) but can be more <strong>efficient for well-structured problems</strong>, since the agent doesn’t need to rethink its strategy from scratch at every step.</p>
<p><strong>Industry Use –</strong> Many multi-step AI workflows now use plan-and-execute variants. For example, <strong>Microsoft’s HuggingGPT</strong> (2023) acted as a <em>planner</em> that would interpret a user’s request and generate a plan to invoke various AI models in sequence (for tasks like “create a video from a prompt”). The open-source project <strong>BabyAGI</strong> similarly maintains a dynamic task list: it creates new tasks and reprioritizes them as it works, which is effectively a continuous planning loop. In the realm of software engineering automation, <strong>GPT-Engineer</strong> and <strong>MetaGPT</strong> (2023–2024) both leveraged explicit planning: GPT-Engineer generates a project “spec” and plan before writing code, and MetaGPT assigns different “roles” (like Architect, Coder, Tester – each an agent) to handle complex coding projects collaboratively according to a plan. Planning is a natural fit for any domain where the solution can be outlined as a series of steps – e.g. <strong>business workflow automation</strong>, where an agent might plan steps like <em>“gather client requirements → run analysis A → generate report → send email with results”</em> before executing them. Many advanced agents actually combine <em>Planning</em> with the Reflection pattern: first plan the approach, then use a self-critique loop to verify each step’s outcome and adjust the plan if necessary.</p>
<h3 id="heading-tool-use-tool-integration-pattern"><strong>Tool Use (Tool-Integration) Pattern</strong></h3>
<p><strong>Description –</strong> The <em>Tool Use</em> pattern enables an AI agent to <strong>extend its capabilities by interacting with external tools and services</strong> as part of its reasoning process. In practice, the agent is provided with a set of <strong>tool interfaces</strong> (for example: web search, calculators, databases, code execution, custom APIs) that it can call via specially formatted outputs. At each decision point, the agent can choose to invoke a tool, supply it with input, and then use the tool’s output to inform subsequent reasoning. This pattern is what gives many agentic AIs access to <strong>up-to-date information and real-world actions</strong> beyond their trained knowledge. Tool use is often combined with the ReAct loop (Thought→Action→Observation) – in fact, the name “ReAct” itself highlights reasoning coupled with actions, and “actions” usually mean tool calls. The key design aspect of this pattern is defining the tools and their usage format clearly to the agent (often in its system prompt) so the agent knows <em>when and how</em> to call them. Proper tool integration can dramatically improve an agent’s effectiveness by allowing it to fetch data, execute code, or delegate subtasks that the LLM can’t handle alone.</p>
<p><strong>Industry Use –</strong> Integrating tools with LLM agents is now standard practice. <strong>OpenAI’s ChatGPT Plugins</strong> and <em>function calling API</em> (2023) are prime examples of the tool use pattern: they let the LLM decide to call functions (tools) like web browsers, calculators, or booking systems by outputting a JSON snippet, which the host system executes. This capability allows ChatGPT-based agents to, for instance, look up current stock prices, retrieve documents, or control home automation devices on behalf of the user. <strong>LangChain</strong>, a library popular among developers for building AI agents, provides a large collection of tools (Google search, Python interpreter, database connectors, etc.) and frameworks like <strong>AgentExecutor</strong> to simplify tool calls. Developers specify a tool’s interface (description, input/output format), and the agent’s LLM <strong>chooses when to use those tools</strong> to answer user requests. Another example is <strong>Microsoft’s Jarvis (HuggingGPT)</strong>, which coordinated calls to various machine-learning models (for image generation, speech recognition, etc.) as tools, all directed by a central LLM planner. In summary, the Tool Use pattern is what transforms an LLM from a static chatbot into a dynamic agent that can interact with software and the world.</p>
<h3 id="heading-multi-agent-collaboration-delegation-pattern"><strong>Multi-Agent Collaboration (Delegation) Pattern</strong></h3>
<p><strong>Description –</strong> The <em>Multi-Agent</em> pattern (also called <strong>delegation or cooperative agents</strong>) involves <strong>multiple agents working together on different aspects of a problem</strong>, often under the guidance of a <em>coordinator</em> agent. Rather than a single AI handling everything, each agent can be specialized – for example, one might be a Planner agent, another a Research agent, another a Critic or Executor, etc. The agents communicate and pass tasks among themselves, forming an <strong>autonomous team of AIs</strong> somewhat analogous to a human team with distinct roles. One common architecture is a <strong>hierarchical delegation</strong>: a <em>manager</em> agent decomposes the goal into sub-tasks and assigns them to worker agents, then integrates their results. Another approach is a <strong>decentralized collaboration</strong> (sometimes likened to a “swarm” of agents) where multiple agents interact and refine ideas without a single leader, though this can be harder to control. The benefit of multi-agent collaboration is <strong>scalability and specialization</strong> – complex tasks can be split into manageable pieces, and each sub-agent can use specialized prompts, tools, or even different model types best suited for its subtask. However, coordinating multiple agents adds overhead and complexity (for example, deciding how they communicate, preventing infinite back-and-forth, and merging results).</p>
<p><strong>Industry Use –</strong> Multi-agent systems are actively used in industry whenever tasks are too complex for a single agent or require diverse skills. For instance, <strong>Perplexity AI’s</strong> production search assistant is reported to use a form of multi-agent orchestration: one agent focuses on searching and retrieving relevant information, then passes it to another agent that formulates a coherent answer, with a final agent verifying facts – all orchestrated by a top-level LLM controller. In software development, the open-source <strong>MetaGPT</strong> project spawns several GPT-4 agents with different roles (Project Manager, Architect, Coder, Tester) to collaboratively build software – exemplifying how delegation can mirror a real-world team. Likewise, <strong>HuggingGPT</strong> used a central GPT-4 to delegate tasks to specialist AI models (for vision, speech, etc.), effectively creating a multi-agent tool-using system for complex multimodal queries. Even where only one LLM agent is present, it might internally simulate multiple “personas” or reasoning threads that debate or collaborate (an approach used in some chatbot implementations). Multi-agent patterns are powerful, but due to their complexity, many teams will start with a single-agent system and introduce additional agents only as needed for scalability.</p>
<h3 id="heading-human-in-the-loop-oversight-hybrid-humanai-pattern"><strong>Human-in-the-Loop Oversight</strong> (Hybrid Human/AI Pattern)</h3>
<p><strong>Description –</strong> Ensuring <strong>human oversight</strong> is a design pattern often employed when absolute reliability or safety is required. In a human-in-the-loop pattern, an AI agent may handle a task autonomously <strong>up to a point, but will pause at a checkpoint and request human input or approval</strong> before proceeding or finalizing its output. This can be implemented by inserting explicit approval steps in the agent’s plan (for example, “If transaction amount &gt; $1000, ask a human for review” or “Before publishing content, get editor approval”). The human-in-the-loop pattern reduces risk by having a person correct the agent’s mistakes or make judgment calls on ambiguities. The downside is that it reduces automation and speed, so it’s typically reserved for cases where <strong>errors are costly</strong> or where ethical and legal constraints demand a human decision (e.g. medical diagnosis, financial investments, content moderation).</p>
<p><strong>Industry Use –</strong> Many real-world “autonomous” systems quietly incorporate human oversight. <strong>Customer support bots</strong> often escalate to a human representative if they detect certain triggers (like frustration, or a request beyond their authority). <strong>Document processing agents</strong> that draft responses or contracts might require a human manager’s sign-off before any sensitive communication is sent out. In software development, an AI code-generation agent could be configured to always seek human approval before merging code changes into a production repository. Regulators and industry best practices in areas like healthcare, finance, and law often <em>mandate</em> human review of AI decisions, so designing an agent with built-in human-in-the-loop checkpoints is a common pattern to ensure compliance. For example, an AI medical diagnostic agent may provide a recommendation but a doctor must approve the final diagnosis, or an AI content generator on a news site might prepare an article draft that stays in a queue until an editor reviews and approves it. This pattern is not about improving the AI’s capabilities per se, but about <strong>integrating AI into real-world workflows responsibly</strong> by combining strengths of AI (speed, scale) with human judgment.</p>
<p><strong>Table 1</strong> below summarizes the <strong>Conceptual (Reasoning) Patterns</strong> and highlights example implementations:</p>
<p><strong>Table 1 – Conceptual Design Patterns for Agentic AI (Reasoning &amp; Behavior)</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pattern Name</strong></td><td><strong>How It Works</strong></td><td><strong>Industry Example(s)</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>ReAct (Reason + Act)</strong></td><td>Agent alternates between <strong>thinking (reasoning in natural language)</strong> and <strong>acting (executing a tool or environment action)</strong> in a loop, using each observation to inform the next thought. Enables dynamic, step-by-step problem solving with tool use, improving transparency and reducing hallucinations by grounding answers in observed facts.</td><td><strong>LangChain Agents:</strong> Default ReAct-style loop where an LLM chooses tools and reacts iteratively.<br /><br /><strong>AutoGPT:</strong> Uses a ReAct-style loop to autonomously perform tasks (e.g., web browsing + analysis) until goals are met.</td></tr>
<tr>
<td><strong>Self-Reflection</strong><br /><em>("Critic &amp; Revise")</em></td><td>The agent <strong>critiques its own output</strong> and refines it through one or more iterations. After an initial answer, the agent checks for errors or improvements, optionally referencing memory or feedback, then revises its solution. Improves accuracy at the cost of extra computation.</td><td><strong>Claude &amp; ChatGPT:</strong> Internally analyze and rewrite responses to correct errors or policy violations.<br /><br /><strong>GitHub Copilot:</strong> Generates code, reviews it for bugs or security flaws, then revises suggestions.</td></tr>
<tr>
<td><strong>Planning</strong><br /><em>("Plan-and-Execute")</em></td><td>The agent <strong>decomposes tasks into a structured plan</strong> before execution. Often split into a <strong>Planner</strong> and <strong>Executor</strong>, with possible dynamic re-planning. Ensures long-horizon tasks are handled methodically.</td><td><strong>HuggingGPT (Microsoft):</strong> Planner breaks a request into sub-tasks and invokes specialized models.<br /><br /><strong>BabyAGI:</strong> Maintains and continuously updates a task list to pursue goals.</td></tr>
<tr>
<td><strong>Tool Use</strong><br /><em>("Tool-Integration")</em></td><td>The agent <strong>invokes external tools or APIs</strong> during reasoning to fetch data, compute results, or perform actions. Tools are accessed through defined interfaces and used when needed. Extends capabilities and grounds outputs in real-world data.</td><td><strong>OpenAI Functions / Plugins:</strong> Enable API calls such as search or booking.<br /><br /><strong>LangChain Toolkit:</strong> Provides tools like web search, Python execution, and custom APIs.</td></tr>
<tr>
<td><strong>Multi-Agent Collaboration</strong><br /><em>("Delegation")</em></td><td><strong>Multiple agents specialize and cooperate</strong> on sub-tasks, communicating via a protocol. Coordination may be centralized or decentralized. Enables complex problem-solving but requires orchestration to avoid loops.</td><td><strong>Perplexity AI (Pro):</strong> Uses retrieval, synthesis, and fact-checking agents together.<br /><br /><strong>MetaGPT:</strong> Spawns multiple role-based agents (Engineer, Reviewer) to build software collaboratively.</td></tr>
<tr>
<td><strong>Human-in-the-Loop</strong></td><td>The agent includes <strong>checkpoints for human review or input</strong>. It may pause for approval or hand off uncertain or high-stakes decisions to a human. Ensures oversight and compliance.</td><td><strong>Customer Support Bots:</strong> Escalate complex cases or high-value refunds to humans.<br /><br /><strong>Content Generation:</strong> Human editors approve AI-generated articles before publishing.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-architectural-amp-code-level-patterns-implementation-amp-orchestration">Architectural &amp; Code-Level Patterns (Implementation &amp; Orchestration)</h2>
<p>While conceptual patterns govern <em>how the AI agent thinks</em>, <strong>architectural patterns</strong> cover <em>how the overall system is structured and executed in code</em>. These patterns address questions like: should you use one agent or many? How do you organize a sequence of LLM calls and tool invocations? How does the agent remember information between steps? Here we outline key architectural design patterns for agentic AI, along with examples of their usage:</p>
<h3 id="heading-single-agent-vs-multi-agent-architecture"><strong>Single-Agent vs. Multi-Agent Architecture</strong></h3>
<p>A fundamental decision is whether to build a <strong>single-agent system</strong> or a <strong>multi-agent system</strong>. In a <strong>single-agent architecture</strong>, one LLM (plus its tools) handles the entire task within a single, continuous reasoning loop. This is simpler to implement and debug, since all logic is in one place. <strong>Most current AI agents are single-agent by default</strong>, and they can already handle many complex tasks by using tools and patterns like ReAct or planning within one agent’s context. A <strong>multi-agent architecture</strong> uses several LLM agents (or LLMs paired with tools) working in concert, typically with a top-level orchestrator to coordinate them. Multi-agent systems shine for <strong>very complex or interdisciplinary problems</strong> where different sub-agents can tackle different subtasks or where parallelism is needed. However, they introduce extra complexity in communication and state sharing. Industry experience shows it’s wise to <strong>“start simple” with one agent</strong>, and only move to a multi-agent design if a single agent is hitting limitations. For example, if you find one agent is struggling to handle all required tools or has to juggle very different skills, that might be a sign to split responsibilities among multiple specialized agents.</p>
<h3 id="heading-orchestration-patterns-workflow-structuring"><strong>Orchestration Patterns (Workflow Structuring)</strong></h3>
<p>If your agent’s task involves multiple steps or multiple agents, <em>how do we coordinate the process?</em> <strong>Orchestration patterns</strong> describe common ways to structure the flow of an agent or multi-agent system. Several patterns are prevalent:</p>
<ul>
<li><p><strong>Deterministic Sequence (Pipeline):</strong> A fixed, linear sequence of operations or model calls, where each step’s output feeds into the next. This is essentially a <em>hard-coded workflow</em>, not dynamic decision-making by the agent. It’s suitable for well-defined processes (for example: retrieve data → summarize → format output) that never veer off script. <em>Industry example:</em> Many <strong>Retrieval-Augmented Generation (RAG)</strong> systems for Q&amp;A use a simple pipeline: first retrieve documents, then pass them with the query to an LLM for answering. Pipelines are fast and easy to audit, but inflexible when requirements change or inputs fall outside the expected path.</p>
</li>
<li><p><strong>Dynamic Loop (Iterative Refinement):</strong> A loop where an agent (or a pair of agents in a generator-critic duo) <strong>repeats a cycle of steps until a condition is met</strong>. This could be an internal loop within a single agent (like the ReAct reasoning loop or a self-refinement loop that continues until a solution is validated), or a loop between multiple agents (e.g. one agent proposes an answer, another evaluates it, and they iterate). <em>Industry example:</em> <strong>Automated software debugging</strong> can be done with an iterative loop: an AI writes code, tests it, then debugs based on failures, repeating until tests pass. Similarly, a <strong>planning agent</strong> might continually update a task list as new goals emerge (as in BabyAGI’s task loop). Looping patterns are powerful for allowing continuous improvement and adaptability, but developers must implement safeguards (like max iterations or timeouts) to prevent infinite loops.</p>
</li>
<li><p><strong>Parallel Branching (Concurrent Tasks):</strong> An orchestration where <strong>multiple sub-tasks or agents run in parallel</strong>, and their results are combined at the end. This is useful for speeding up tasks by exploiting parallelism or obtaining <em>multiple perspectives at once</em>. <em>Industry example:</em> A complex <strong>business intelligence agent</strong> might fork into parallel branches – one agent analyzes sales data, another monitors social media trends – and then a final process merges insights into a single report. Another example is using an ensemble of agents to independently research a question and then aggregating their answers to increase accuracy. Parallel orchestration can reduce overall latency, but requires a way to merge or reconcile outputs and can consume more resources.</p>
</li>
<li><p><strong>Hierarchical Delegation:</strong> A multi-agent orchestration where a <strong>central “manager” agent dynamically delegates tasks</strong> to one or more <em>worker</em> agents and coordinates the results. This is essentially a runtime planner-executor system: the manager interprets the user’s request, breaks it into pieces, then might even spawn new agents (or call different services) to handle each piece. After each subtask, the manager evaluates results and may assign new tasks or adjust the plan. <em>Industry example:</em> <strong>HuggingGPT</strong> had GPT-4 as a top-level controller that would create subtasks and call various AI models (as tools) to address each part, then synthesize an answer. Similarly, some AI assistants use a manager agent that decides when to ask a knowledge-base Q&amp;A agent versus when to consult a calculator or when to request human help. The benefit is extreme flexibility – the workflow is decided on-the-fly by the AI – but the challenge is that the <em>prompt for the manager agent must clearly define how to make these decisions</em>, and debugging such systems can be difficult.</p>
</li>
<li><p><strong>Decentralized Cooperation (Swarm):</strong> An advanced pattern where <strong>multiple agents freely communicate and collaborate without a single point of control</strong>. For example, agents might message each other, ask each other questions, and vote on answers. This is analogous to a team meeting where ideas are discussed and refined among peers. While mostly experimental, companies have explored using swarms of agents to generate creative ideas or to do complex analyses by consensus. <em>Example:</em> In 2023, researchers at MIT and Google described using multiple agents debating each other’s answers to improve factual accuracy (the <em>“society of minds”</em> approach). Swarm orchestration can yield rich results, but it’s the most complex to implement and prone to <strong>chatter or deadlocks</strong> if not carefully constrained. In practice, this is less common in industry compared to the manager/worker model.</p>
</li>
</ul>
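<p>Of these, the deterministic pipeline is the simplest to express in code: a fixed sequence of steps where each output feeds the next, with no decision-making by the agent. The steps below are stubs sketching a minimal RAG-style flow:</p>

```python
# Deterministic pipeline: a hard-coded sequence where each step's output
# feeds the next. The steps are stubs for a retriever, a summarizer, and
# a formatter.

def retrieve(query: str) -> list[str]:
    return [f"doc about {query}"]         # stand-in for a document retriever

def summarize(docs: list[str]) -> str:
    return "; ".join(docs)                # stand-in for an LLM summarizer

def format_output(summary: str) -> str:
    return f"Answer: {summary}"

def pipeline(query: str, steps=(retrieve, summarize, format_output)):
    value = query
    for step in steps:                    # fixed order, no agent choice
        value = step(value)
    return value

print(pipeline("vector databases"))
# prints: Answer: doc about vector databases
```

<p>Adding dynamism – a loop condition, a branch, or a manager agent choosing the next step – is what turns this fixed pipeline into the other orchestration patterns above.</p>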
<h3 id="heading-memory-management-pattern"><strong>Memory Management Pattern</strong></h3>
<p>LLM-based agents don’t have persistent memory of past interactions unless we provide it. The <strong>memory management pattern</strong> is about how an agent <strong>stores and retrieves information over time</strong> to maintain context across long tasks. In practical terms, agent memory is often divided into <strong>short-term memory</strong> (the information in the LLM’s active context window, which is limited) and <strong>long-term memory</strong> (information saved to an external store that can be fetched when needed). Common approaches include:</p>
<ul>
<li><p><strong>Summarization Buffer:</strong> The agent keeps a rolling summary of earlier conversation turns or task progress and prepends that summary in the prompt when the raw history gets too long. This condenses past context to fit the context window.</p>
</li>
<li><p><strong>Vector Database Memory:</strong> Key facts, prior results, or entire documents can be embedded into high-dimensional vectors and stored in a vector database. When needed, the agent finds relevant items by semantic similarity search and injects them into context. For example, an agent might store everything it learns about a project in a Pinecone or Weaviate vector store; later, when a new question arises, it retrieves the most relevant pieces of that stored knowledge to inform its answer.</p>
</li>
<li><p><strong>Knowledge Base &amp; Retrieval-Augmented Generation (RAG):</strong> This is a variant of the tool-use pattern: the agent has access to a <strong>document retrieval system</strong> (like ElasticSearch or a corporate wiki) and can ask it for information. In effect, the agent’s “memory” is an external knowledge base that it queries as needed. This ensures the agent’s knowledge stays up-to-date without retraining the model. Many enterprise agents use RAG as a memory mechanism – for instance, a customer service agent might retrieve a customer’s profile and past tickets from a database when the customer asks a question.</p>
</li>
<li><p><strong>Persistent Storage &amp; State Files:</strong> Some agents write intermediate results or state to files or databases during their operation. This can include writing a draft output to a file, logging completed sub-tasks, or noting progress. Persisting state allows the agent to be paused and resumed, or to recover from errors without starting over. In software automation, an agent might maintain a “to-do list” file on disk that it updates as it completes tasks (as in certain AutoGPT variants). As one industry guide noted, even using simple files can be an effective first approach to agent memory, leveraging the fact that LLMs have been trained on reading/writing files and code. Over time, teams may migrate to databases for more robust concurrent access and query capabilities as their agent’s memory grows or needs to be shared.</p>
</li>
</ul>
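<p>A minimal sketch of the short-term/long-term split, using a toy bag-of-words “embedding” in place of a real vector model; the <code>AgentMemory</code> class and its method names are hypothetical, not from any particular framework:</p>

```python
import math
from collections import Counter, deque

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts; real systems use learned vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AgentMemory:
    """Short-term rolling buffer plus a long-term similarity store."""

    def __init__(self, window: int = 4):
        self.recent = deque(maxlen=window)  # short-term: only the last N turns
        self.store = []                     # long-term: (vector, text) pairs

    def add(self, text: str) -> None:
        self.recent.append(text)
        self.store.append((embed(text), text))

    def recall(self, query: str, k: int = 2) -> list:
        # Retrieve the k most similar long-term entries for the query.
        qv = embed(query)
        ranked = sorted(self.store, key=lambda item: cosine(qv, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

mem = AgentMemory()
mem.add("project deadline is Friday")
mem.add("the database password was rotated")
mem.add("lunch is at noon")
print(mem.recall("when is the project deadline", k=1))
```

<p>Swapping <code>embed</code> for a real embedding model and <code>store</code> for a vector database gives the production version of this pattern; the control flow stays the same.</p>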
<p>Efficient memory management is crucial for long-running agents – it prevents the LLM from “forgetting” important details and avoids overloading the context with irrelevant data. Many frameworks (such as LangChain or LlamaIndex) provide memory components to handle summaries or vector-based retrieval. For instance, default <strong>AutoGPT</strong> setups in 2023 used a Pinecone vector database for long-term memory, enabling the agent to remember facts across runs (though newer versions explored local file-based memory as well). By 2025, numerous vendors (including Oracle) have published guidelines on choosing memory architectures (comparing vectors vs. graphs vs. relational DBs) to help developers design scalable agent memory. The consensus is to start simple (even plain files or JSON logs for single-user agents) and only move to complex memory stores when needed.</p>
<h3 id="heading-error-handling-and-safety-guardrails"><strong>Error Handling and Safety Guardrails</strong></h3>
<p>Autonomous agents must be constructed with <strong>robust error handling and safety mechanisms</strong> to be viable in production. This is less a single pattern and more a set of best practices that should be baked into an agent’s design:</p>
<ul>
<li><p><strong>Retry and Fallback Logic:</strong> Agents often call external tools and APIs, which can fail or return unexpected results. A well-designed agent catches errors (e.g., tool exceptions or timeouts) and <strong>implements fallback strategies</strong>. For example, if a web search query fails, the agent could try a backup search API, or if an API returns an error, the agent can reformat the query and retry a limited number of times before giving up. This prevents the entire agent from crashing due to one failed step.</p>
</li>
<li><p><strong>Iteration Limits &amp; Timeouts:</strong> To avoid infinite loops (a risk whenever an agent reasons in a loop or multiple agents call each other), developers set bounds – e.g., maximum iterations or a watchdog timer to stop the agent after a certain duration. In practice, AutoGPT and similar systems implemented user-defined limits on how many cycles the agent could go through before pausing for user confirmation. These controls ensure the agent doesn’t run amok consuming resources or getting stuck chasing a wrong objective.</p>
</li>
<li><p><strong>Validation and Policy Enforcement:</strong> Many production systems add explicit <strong>guardrail checks</strong> on the agent’s outputs. This could be as simple as validating the format of an answer (e.g., ensure a JSON output is valid JSON) or as complex as running a content filter on the agent’s response to filter out any policy violations (hate speech, privacy leaks, etc.). In tool-using agents, it’s also common to sandbox dangerous actions – for instance, an agent allowed to execute Python code might run in a restricted environment, and certain sensitive operations (file deletion, external network calls) might be disallowed or require special authorization. Such guardrails are critical when deploying agents in enterprise settings, preventing costly mistakes and ensuring compliance with regulations.</p>
</li>
</ul>
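<p>The three practices above can be combined in a short sketch. The tool functions and the required <code>trends</code> JSON field are placeholders chosen for illustration:</p>

```python
import json

def call_with_retry(tool, query, fallback=None, max_attempts=3):
    """Call a tool, retrying on failure; fall back to a backup tool on the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return tool(query)
        except Exception:
            if attempt == max_attempts:
                if fallback is not None:
                    return fallback(query)
                raise

def validate_output(raw: str) -> dict:
    """Guardrail: reject agent output that is not valid JSON with a 'trends' key."""
    data = json.loads(raw)  # raises an error on malformed output
    if "trends" not in data:
        raise ValueError("missing required 'trends' field")
    return data

# Demo: a flaky primary tool plus a reliable backup.
calls = {"n": 0}

def flaky_search(query):
    calls["n"] += 1
    raise TimeoutError("search backend unavailable")

def backup_search(query):
    return '{"trends": ["example trend"]}'

raw = call_with_retry(flaky_search, "ai agents", fallback=backup_search)
print(validate_output(raw))
```

<p>The same shape applies regardless of language or framework: a bounded loop around each tool call, a fallback path, and a validation gate before the result is trusted.</p>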
<p>In summary, code-level patterns ensure that an agent’s implementation is organized and resilient. For instance, a single-agent ReAct loop might be embedded in a larger <strong>deterministic workflow with human-in-the-loop oversight and robust error-handling</strong> – combining predictability with flexibility. The table below summarizes the main architectural patterns and practices:</p>
<p><strong>Table 2 – Architectural Design Patterns &amp; Practices for Agentic AI</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pattern / Practice</strong></td><td><strong>Key Idea</strong></td><td><strong>Example Implementations</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Single-Agent</strong></td><td>One LLM-driven agent (with tools) handles the entire task in a single loop. Simpler to build; suited for many tasks especially within one domain. Requires prompts to cover all needed behaviors.</td><td><strong>ChatGPT with Plugins:</strong> One GPT‑4 instance uses various tools (web browsing, code execution) to answer queries end-to-end.<br /><br /><strong>AutoGPT (single instance):</strong> An autonomous agent that iteratively calls itself and tools to achieve the user’s goal without spawning other agents.</td></tr>
<tr>
<td><strong>Multi-Agent</strong></td><td>Multiple LLM agents cooperate, often with a coordinator agent orchestrating specialized worker agents. More modular and scalable for complex tasks, but adds overhead in communication and integration.</td><td><strong>HuggingGPT (Microsoft, 2023):</strong> GPT‑4 acted as a manager, delegating to expert models (vision, speech, etc.) and combining outputs.<br /><br /><strong>Generative Agents (Stanford, 2023):</strong> Simulated a community of agents communicating to accomplish tasks, showcasing decentralized multi-agent interactions.</td></tr>
<tr>
<td><strong>Sequential Workflow</strong><br /><em>("Pipeline")</em></td><td>A predefined linear sequence of steps or model calls. No agent decision-making about flow—each step always follows the previous one. Best for <strong>deterministic tasks</strong> (e.g., fixed RAG or ETL pipelines) where flexibility isn’t required.</td><td><strong>Standard RAG Pipeline:</strong> Always retrieve documents, then pass them to the LLM for answering; common in production QA bots.<br /><br /><strong>ETL Processes:</strong> LLMs and tools run in fixed order (extract → transform → load) for predictable data workflows.</td></tr>
<tr>
<td><strong>Iterative Loop</strong></td><td>A reasoning or interaction cycle repeats until completion criteria are met. Enables incremental improvement or repeated checking. Can exist within one agent (e.g., ReAct, self-critique) or across multiple agents. Must include exit conditions to prevent infinite loops.</td><td><strong>Self‑Debugging Code Agent:</strong> Writes code, tests it, and debugs repeatedly until tests pass (e.g., <strong>Voyager</strong>, <strong>ChatGPT Code Interpreter</strong>, 2023).<br /><br /><strong>Reflexion Agents:</strong> Use errors as feedback to retry tasks differently on subsequent loops (Shinn et al., 2023).</td></tr>
<tr>
<td><strong>Parallel Branching</strong></td><td>The system splits into <strong>parallel tasks</strong> that run concurrently and later merge results. Improves speed or explores multiple solution paths simultaneously. Requires aggregation or summarization logic.</td><td><strong>IBM Watson Discovery (2024):</strong> Issued multiple parallel searches and aggregated findings to answer complex queries.<br /><br /><strong>Ensemble QA (e.g., Bing Chat, 2023):</strong> Runs multiple agents or prompts in parallel and synthesizes the best answer.</td></tr>
<tr>
<td><strong>Hierarchical Delegation</strong></td><td>A <strong>manager agent dynamically delegates</strong> sub-tasks to worker agents or services and assembles results. Workflow adapts on the fly based on intermediate outputs.</td><td><strong>MosaicML AGI Orchestration (2023):</strong> Manager-worker pattern spawning specialized agents (math, search, etc.) for better complex-task performance.<br /><br /><strong>LangChain Multi‑Action Agents:</strong> Allow agents to call tools or other agents as subroutines in a flexible hierarchy.</td></tr>
<tr>
<td><strong>Memory Persistence</strong></td><td>The agent uses <strong>external memory</strong> (files, databases, vector stores) to retain information across steps or sessions. Enables long-term context beyond the LLM’s window and continuity across runs.</td><td><strong>AutoGPT (with Pinecone):</strong> Stores facts and objectives in a vector database for later recall.<br /><br /><strong>Salesforce AI Customer Agent (2024):</strong> Persists conversation state and customer data in CRM systems to maintain continuity.</td></tr>
<tr>
<td><strong>Error Handling &amp; Safeguards</strong></td><td>The system includes <strong>error-catching, fallbacks, and safety checks</strong>: limiting loops, validating outputs, handling tool failures, and enabling human overrides for critical actions. Essential for production reliability.</td><td><strong>Tool‑Use Agents (OpenAI API):</strong> Use try/except around function calls; errors are returned to the model for self-correction.<br /><br /><strong>Enterprise AI Assistants:</strong> Employ safety harnesses—human confirmation for risky actions and content filters with fallback responses.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-conclusion-amp-best-practices">Conclusion &amp; Best Practices</h2>
<p>Design patterns provide a toolbox for building <strong>effective agentic AI systems</strong>. Rather than coding each agent from scratch, developers can leverage these proven patterns – or even use libraries (such as <strong>LangChain</strong>, or the <strong>AgentPatterns</strong> library, which offers several ready-made agent patterns) – to bootstrap their agent development. When choosing patterns for your project, consider the problem complexity and requirements:</p>
<ul>
<li><p><strong>Start Simple, Then Add Complexity:</strong> It’s usually best to begin with the simplest approach that could work. For example, if a single prompt with retrieval (a deterministic RAG chain) answers your question, you may not need a full agent. If you do need an agent, a single-agent ReAct with a few tools is a good starting point for many use cases. Only introduce additional patterns (like planning or multiple agents) when the simpler setup fails to produce satisfactory results or can’t handle the scope of the task. Unnecessary complexity can make agents harder to debug and slower.</p>
</li>
<li><p><strong>Match Patterns to Problems:</strong> Different tasks call for different strategies. If the task involves <strong>open exploration or uncertainty</strong>, a ReAct-style or even a Tree-of-Thoughts approach might be appropriate. If the task is <strong>well-structured and lengthy</strong>, a Planning (plan-and-execute) pattern could work best. For tasks requiring <strong>high accuracy</strong>, consider adding Reflection so the agent can self-correct. For tasks that span <strong>multiple domains or skill sets</strong>, a Multi-agent delegation pattern may be effective. Refer back to the pattern tables above for “best for” guidance (and notice how many implementations actually use several patterns together).</p>
</li>
<li><p><strong>Combine Patterns for Synergy:</strong> In practice, many robust agent systems <strong>mix and match patterns</strong> rather than relying on just one. For instance, an agent may use a Planning phase to outline a solution, then enter a ReAct loop to execute each step, and finally invoke a Reflection loop to verify the result. Or you might have a mostly sequential pipeline with a <em>single “agent step”</em> in the middle that uses a ReAct loop to handle a particularly unpredictable part of the process. Don’t hesitate to compose patterns as needed – but do so in a controlled way (for example, ensure that if you have multiple agents or loops, you have timeouts or iteration limits to keep things on track).</p>
</li>
<li><p><strong>Maintain Oversight and Iterate:</strong> No matter which patterns you use, remember that <strong>autonomous agents require careful monitoring and refinement</strong>. Log the agent’s decisions and tool uses, and if possible, have it explain its chain-of-thought (or use a “transparency” pattern like ReAct that produces an explicit reasoning trace). This makes it easier to debug when the agent gets confused. Incorporate human-in-the-loop steps for critical junctures where a mistake would be costly. Test your agent thoroughly with diverse scenarios, and be prepared to adjust its prompts or add new tools/patterns if it encounters failure modes. As Anthropic’s engineers note, building an effective agent is an <strong>iterative process</strong> – use telemetry and feedback to continually improve the agent’s reasoning strategies and safety over time.</p>
</li>
</ul>
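<p>As a sketch of such composition, a plan, execute-loop, and reflect sequence might look like the following, with stub functions standing in for real LLM calls:</p>

```python
def plan(goal):
    # Planning phase: a real system would ask an LLM to outline the steps.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(step, max_iters=3):
    # Iterative execution loop with an explicit exit condition (iteration limit).
    for attempt in range(1, max_iters + 1):
        result = f"{step} (attempt {attempt})"
        if attempt >= 2:  # stand-in success check, e.g. "tests pass"
            return result
    raise RuntimeError(f"step failed after {max_iters} iterations: {step}")

def reflect(results):
    # Reflection phase: verify the combined output before returning it.
    if not all("attempt" in r for r in results):
        raise ValueError("missing execution trace in results")
    return " | ".join(results)

goal = "q3 report"
outputs = [execute(step) for step in plan(goal)]
print(reflect(outputs))
```

<p>Note that the iteration limit in <code>execute</code> and the check in <code>reflect</code> are exactly the safeguards discussed earlier; composing patterns does not remove the need for them.</p>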
<p>By understanding and applying these design patterns, AI developers can <strong>create agents that are not only smarter, but also more reliable and easier to maintain</strong>. The landscape of agentic AI is rapidly evolving, but these patterns have emerged as <strong>fundamental building blocks</strong> for current industry implementations. Whether it’s a virtual assistant automating business workflows or a conversational agent conducting research, a solid grasp of agent design patterns will help ensure your AI <strong>acts intelligently and safely in pursuit of its goals</strong>.</p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="http://anthropic.com">Building Effective AI Agents | Anthropic</a></p></li>
<li><p><a target="_blank" href="http://docs.databricks.com">Agent system design patterns | Databricks on AWS</a></p></li>
<li><p><a target="_blank" href="http://wollenlabs.com">Wollen Labs</a></p></li>
<li><p><a target="_blank" href="http://agent-patterns.readthedocs.io">ReAct Agent Pattern — Agent Patterns 0.2.0 documentation</a></p></li>
<li><p><a target="_blank" href="http://servicesground.com">Agentic Reasoning Patterns: 5 Powerful Frameworks for Smarter AI Agents</a></p></li>
<li><p><a target="_blank" href="http://mlopscommunity.substack.com">Inside Claude Code: how Anthropic rethought coding with agents</a></p></li>
<li><p><a target="_blank" href="http://agent-patterns.readthedocs.io">Agent Patterns Documentation — Agent Patterns 0.2.0 documentation</a></p></li>
<li><p><a target="_blank" href="http://blogs.oracle.com">Comparing File Systems and Databases for Effective AI Agent Memory Management | developers</a></p></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AI Agentic Design Pattern - Prompt Chaining]]></title><description><![CDATA[Overview
Prompt chaining (sometimes called the Pipeline pattern) is a powerful strategy for handling complex tasks with large language models (LLMs). Instead of relying on a single, monolithic prompt, prompt chaining breaks down a problem into a sequ...]]></description><link>https://tech.kingdavidconsulting.com/ai-agentic-design-pattern-prompt-chaining</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/ai-agentic-design-pattern-prompt-chaining</guid><category><![CDATA[Prompt Engineering]]></category><category><![CDATA[#PromptEngineering]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[design patterns]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Fri, 12 Dec 2025 19:27:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/xG8IQMqMITM/upload/c7680383f4cc7dd1357b534cde7b0a5b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-overview">Overview</h2>
<p>Prompt chaining (sometimes called the Pipeline pattern) is a powerful strategy for handling complex tasks with large language models (LLMs). Instead of relying on a single, monolithic prompt, prompt chaining breaks down a problem into a sequence of smaller, focused steps. Each step is addressed individually, and the output from one prompt is passed as input to the next. This modular approach improves reliability, makes debugging easier, and enables integration with external tools and APIs.</p>
<hr />
<h2 id="heading-why-prompt-chaining">Why Prompt Chaining?</h2>
<ul>
<li><p><strong>Reduces cognitive load:</strong> Each step is simpler and less ambiguous, lowering the chance of errors and hallucinations.</p>
</li>
<li><p><strong>Improves reliability:</strong> Sequential decomposition allows for validation and correction at each stage.</p>
</li>
<li><p><strong>Enables tool integration:</strong> Each step can interact with external systems, APIs, or databases.</p>
</li>
<li><p><strong>Foundation for agentic systems:</strong> Enables multi-step reasoning, planning, and decision-making.</p>
</li>
</ul>
<hr />
<h2 id="heading-pattern-example-three-steps">Pattern Example (Three Steps)</h2>
<ol>
<li><p><strong>Summarize</strong> raw material with tight instructions.</p>
</li>
<li><p><strong>Extract</strong> structured data (JSON) from the summary.</p>
</li>
<li><p><strong>Compose</strong> human-ready content using the structured output.</p>
</li>
</ol>
<p>Assigning a distinct role to each step (e.g., Market Analyst, Trend Analyst, Documentation Writer) helps focus the model and improves output quality.</p>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765565601810/ce60da55-ca95-4cb5-8e07-f961c74242f0.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-c-code-sample-agent-framework-sequential-chain">C# Code Sample: Agent Framework Sequential Chain</h2>
<blockquote>
<p><strong>Prerequisites</strong></p>
<ul>
<li><p>.NET 8+</p>
</li>
<li><p>Azure OpenAI resource &amp; deployed model (e.g., <code>gpt-4o-mini</code>)</p>
</li>
<li><p>Sign in with <code>az login</code> or use an API key credential</p>
</li>
<li><p>NuGet packages (preview):</p>
<pre><code class="lang-plaintext">  dotnet add package Azure.AI.OpenAI --prerelease
  dotnet add package Azure.Identity
  dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
</code></pre>
</li>
</ul>
</blockquote>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> System;
<span class="hljs-keyword">using</span> System.Text.Json;
<span class="hljs-keyword">using</span> Azure.Identity;
<span class="hljs-keyword">using</span> Azure.AI.OpenAI;
<span class="hljs-keyword">using</span> Microsoft.Agents.AI;
<span class="hljs-keyword">using</span> Microsoft.Agents.AI.OpenAI;

<span class="hljs-keyword">class</span> <span class="hljs-title">Program</span>
{
    <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> System.Threading.Tasks.<span class="hljs-function">Task <span class="hljs-title">Main</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> endpoint = <span class="hljs-keyword">new</span> Uri(<span class="hljs-string">"https://&lt;your-azure-openai&gt;.openai.azure.com/"</span>);
        <span class="hljs-keyword">var</span> modelId = <span class="hljs-string">"&lt;your-deployment-or-model-id&gt;"</span>; <span class="hljs-comment">// e.g., gpt-4o-mini</span>
        <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> AzureOpenAIClient(endpoint, <span class="hljs-keyword">new</span> AzureCliCredential());

        <span class="hljs-comment">// General-purpose agent; constrain behavior per step via instructions</span>
        AIAgent agent = client
            .GetChatClient(modelId)
            .CreateAIAgent(instructions:
                <span class="hljs-string">"You are a disciplined assistant. Follow the user's step-specific instructions exactly."</span>);

        <span class="hljs-comment">// Step 1: Summarize</span>
        <span class="hljs-keyword">var</span> source = <span class="hljs-string">@"The new laptop model features a 3.5 GHz octa-core CPU, 16GB RAM, and a 1TB NVMe SSD.
                       It targets power users, claims 12-hour battery life, and includes Wi-Fi 7."</span>;

        <span class="hljs-keyword">string</span> summary = <span class="hljs-keyword">await</span> agent.RunAsync(
            <span class="hljs-string">"ROLE: Market Analyst.\n"</span> +
            <span class="hljs-string">"TASK: Summarize the key findings in &lt;=120 words. Stay factual and concise.\n"</span> +
            <span class="hljs-string">"TEXT:\n"</span> + source);

        <span class="hljs-keyword">if</span> (<span class="hljs-keyword">string</span>.IsNullOrWhiteSpace(summary))
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException(<span class="hljs-string">"Step 1 produced an empty summary."</span>);

        <span class="hljs-comment">// Step 2: Extract trends as structured JSON</span>
        <span class="hljs-keyword">string</span> trendsRaw = <span class="hljs-keyword">await</span> agent.RunAsync(
            <span class="hljs-string">"ROLE: Trend Analyst.\n"</span> +
            <span class="hljs-string">"TASK: Return ONLY strict JSON. Extract 3 trends with 'name' and 'supportingData'.\n"</span> +
            <span class="hljs-string">"SCHEMA: { \"trends\": [{\"name\": string, \"supportingData\": string}] }\n"</span> +
            <span class="hljs-string">"INPUT:\n"</span> + summary);

        JsonDocument trendsDoc;
        <span class="hljs-keyword">try</span>
        {
            trendsDoc = JsonDocument.Parse(trendsRaw);
            _ = trendsDoc.RootElement.GetProperty(<span class="hljs-string">"trends"</span>);
        }
        <span class="hljs-keyword">catch</span> (Exception ex)
        {
            <span class="hljs-comment">// Corrective re-prompt</span>
            trendsRaw = <span class="hljs-keyword">await</span> agent.RunAsync(
                <span class="hljs-string">"The previous output was not valid JSON per schema.\n"</span> +
                <span class="hljs-string">"Return ONLY strict JSON with shape:\n"</span> +
                <span class="hljs-string">"{ \"trends\": [{\"name\": string, \"supportingData\": string}] }\n"</span> +
                <span class="hljs-string">"INPUT:\n"</span> + summary);
            trendsDoc = JsonDocument.Parse(trendsRaw);
        }

        <span class="hljs-keyword">var</span> trendsJson = trendsDoc.RootElement.GetProperty(<span class="hljs-string">"trends"</span>).ToString();

        <span class="hljs-comment">// Step 3: Compose email</span>
        <span class="hljs-keyword">string</span> email = <span class="hljs-keyword">await</span> agent.RunAsync(
            <span class="hljs-string">"ROLE: Expert Documentation Writer.\n"</span> +
            <span class="hljs-string">"TASK: Draft a concise email (&lt;=150 words) to the marketing team.\n"</span> +
            <span class="hljs-string">"Include: short intro, bullet list with trend names + supporting data, single CTA line.\n"</span> +
            <span class="hljs-string">"CONTEXT:\n"</span> +
            <span class="hljs-string">$"Summary:\n<span class="hljs-subst">{summary}</span>\n"</span> +
            <span class="hljs-string">$"Trends (JSON):\n<span class="hljs-subst">{trendsJson}</span>"</span>);

        Console.WriteLine(<span class="hljs-string">"\n--- Summary ---\n"</span> + summary);
        Console.WriteLine(<span class="hljs-string">"\n--- Trends (JSON) ---\n"</span> + trendsRaw);
        Console.WriteLine(<span class="hljs-string">"\n--- Email ---\n"</span> + email);
    }
}
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Turning SEC Filings into AI Fuel: Inside Moedim.Edgar’s MCP-Powered Gateway to EDGAR]]></title><description><![CDATA[If you’ve ever tried to build an AI system that reasons about public companies, you’ve probably run into the same brick wall: the SEC’s EDGAR database. It’s a goldmine of corporate disclosures, yet it feels like it was designed for humans with patien...]]></description><link>https://tech.kingdavidconsulting.com/turning-sec-filings-into-ai-fuel-inside-moedimedgars-mcp-powered-gateway-to-edgar</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/turning-sec-filings-into-ai-fuel-inside-moedimedgars-mcp-powered-gateway-to-edgar</guid><category><![CDATA[sec.gov]]></category><category><![CDATA[edgar]]></category><category><![CDATA[edgar server]]></category><category><![CDATA[SEC EDGAR API]]></category><category><![CDATA[AI-ready SEC filings]]></category><category><![CDATA[MCP server for AI]]></category><category><![CDATA[Equity research tools]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Sat, 22 Nov 2025 02:05:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763777623956/6299d91b-b2a8-4d6b-ab3e-d196f13acb74.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve ever tried to build an AI system that reasons about public companies, you’ve probably run into the same brick wall: the SEC’s EDGAR database. It’s a goldmine of corporate disclosures, yet it feels like it was designed for humans with patience, not machines with tokens.</p>
<p><a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar"><strong>Moedim.Edgar</strong></a> steps directly into that gap. It’s a modern C#/.NET library—and a companion MCP server—that turns raw EDGAR APIs into type-safe, async-friendly building blocks for AI agents and financial applications.</p>
<p>In other words: this repo (<a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar"><code>https://github.com/kdcllc/Moedim.Edgar</code></a>) is about turning SEC filings into <em>AI-ready context</em>.</p>
<hr />
<h2 id="heading-from-legacy-feeds-to-ai-native-infrastructure"><strong>From Legacy Feeds to AI-Native Infrastructure</strong></h2>
<p>At its core, <a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar"><strong>Moedim.Edgar</strong></a> is a .NET 8 library that wraps the SEC EDGAR APIs with:</p>
<ul>
<li><p>A modern <code>IHttpClientFactory</code>-based HTTP client</p>
</li>
<li><p>Strongly-typed C# models for EDGAR data structures</p>
</li>
<li><p>Full async/await support</p>
</li>
<li><p>Dependency injection extensions via <code>Microsoft.Extensions.DependencyInjection</code></p>
</li>
<li><p>Configurable options via a clean options pattern</p>
</li>
</ul>
<p>The goal is not just “yet another HTTP wrapper,” but a library that feels native in contemporary .NET backends and can be dropped into real systems: microservices, data pipelines, and—most interestingly—AI assistants.</p>
<p>The project structure reflects that intent:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar"><strong>Moedim.Edgar</strong></a> – the core EDGAR client: services, models, query types.</p>
</li>
<li><p><strong>Moedim.Edgar.Sample</strong> – a comprehensive sample app walking through all main services.</p>
</li>
<li><p><strong>Moedim.Edgar.Mcp</strong> – an MCP server that exposes EDGAR as tools for AI agents.</p>
</li>
</ul>
<p>That last project is where things shift from “API client” to “AI infrastructure.”</p>
<hr />
<h2 id="heading-why-edgar-matters-for-ai"><strong>Why EDGAR Matters for AI</strong></h2>
<p>If you’re building AI for:</p>
<ul>
<li><p>Equity research</p>
</li>
<li><p>Corporate credit analysis</p>
</li>
<li><p>Competitive intel</p>
</li>
<li><p>Regulatory/compliance workflows</p>
</li>
</ul>
<p>…you quickly realize you need structured, reliable access to:</p>
<ul>
<li><p>Company facts (revenues, assets, liabilities, etc.)</p>
</li>
<li><p>Specific financial concepts over time</p>
</li>
<li><p>Filings history and search by form types (10-K, 10-Q, 8-K, etc.)</p>
</li>
<li><p>Latest filings for monitoring and alerts</p>
</li>
<li><p>Filing-level details and document structures</p>
</li>
</ul>
<p><a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar</a>’s models and services map almost exactly onto this mental model:</p>
<ul>
<li><p><strong>Company Lookup</strong> – resolve companies to CIKs and metadata.</p>
</li>
<li><p><strong>Company Facts</strong> – explore the full universe of reported facts.</p>
</li>
<li><p><strong>Company Concept</strong> – zoom in on a specific metric (e.g. revenue).</p>
</li>
<li><p><strong>Edgar Search</strong> – discovery of filings with flexible queries.</p>
</li>
<li><p><strong>Latest Filings</strong> – what just hit the tape.</p>
</li>
<li><p><strong>Filing Details</strong> – drill into specific submissions.</p>
</li>
</ul>
<p>The sample application ships as a guided tour: configuration, service usage, pagination, error handling, and output examples. It’s less a “hello world” and more a “here’s how you’d actually wire this into a research workflow.”</p>
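<p>As a rough sketch, wiring a couple of these services into such a workflow might look like the following. The interface and method names here are illustrative assumptions, not the library’s exact API; the sample project shows the real surface.</p>
<pre><code class="lang-csharp">// Illustrative sketch only: service and method names are assumptions,
// not Moedim.Edgar's exact API. See Moedim.Edgar.Sample for real usage.
public class ResearchWorkflow
{
    private readonly ICompanyLookupService _lookup;    // hypothetical name
    private readonly ICompanyConceptService _concepts; // hypothetical name

    public ResearchWorkflow(ICompanyLookupService lookup, ICompanyConceptService concepts)
    {
        _lookup = lookup;
        _concepts = concepts;
    }

    public async Task&lt;decimal?&gt; LatestRevenueAsync(string ticker, CancellationToken ct)
    {
        // Resolve the ticker to a CIK, then pull a single concept's timeseries.
        var company = await _lookup.FindByTickerAsync(ticker, ct);
        if (company is null) return null;

        var series = await _concepts.GetConceptAsync(company.Cik, "Revenues", ct);
        return series?.LatestValue;
    }
}
</code></pre>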
<hr />
<h2 id="heading-mcp-giving-ai-agents-first-class-access-to-edgar"><strong>MCP: Giving AI Agents First-Class Access to EDGAR</strong></h2>
<p>The most forward-looking part of the repo is <strong>Moedim.Edgar.Mcp</strong>, an implementation of a Model Context Protocol server that exposes EDGAR as a toolset to AI assistants.</p>
<p>Instead of having your AI agent hallucinate SEC data or rely on brittle scraping, you define <em>tools</em> that:</p>
<ul>
<li><p>Look up companies</p>
</li>
<li><p>Pull company facts</p>
</li>
<li><p>Fetch concept-specific timeseries</p>
</li>
<li><p>Search filings</p>
</li>
<li><p>Retrieve filing details</p>
</li>
</ul>
<p>The MCP server is built on:</p>
<ul>
<li><p>The <strong>ModelContextProtocol</strong> C# SDK</p>
</li>
<li><p>The <a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar"><strong>Moedim.Edgar</strong></a> library itself</p>
</li>
<li><p>The .NET Generic Host (<code>Microsoft.Extensions.Hosting</code>)</p>
</li>
</ul>
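<p>For orientation, a minimal MCP server host built on these pieces looks roughly like the sketch below. It follows the ModelContextProtocol C# SDK’s documented hosting pattern; the tool body is a placeholder, and Moedim.Edgar.Mcp’s actual wiring may differ.</p>
<pre><code class="lang-csharp">// Sketch of an MCP server host following the ModelContextProtocol C# SDK's
// documented pattern. The tool body is a placeholder; Moedim.Edgar.Mcp's
// actual wiring may differ.
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly(); // discovers [McpServerTool] methods in this assembly
await builder.Build().RunAsync();

[McpServerToolType]
public static class EdgarTools
{
    [McpServerTool, Description("Resolve a company ticker to its SEC CIK.")]
    public static string LookupCompany(string ticker)
    {
        // The real server delegates to the Moedim.Edgar services here.
        return $"CIK lookup requested for {ticker}";
    }
}
</code></pre>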
<p>The documentation calls out:</p>
<ul>
<li><p><strong>Self-contained binaries</strong> for Windows, macOS, and Linux (no runtime required).</p>
</li>
<li><p>Targeting the <strong>.NET 10 SDK</strong> for development, with builds published as cross-platform, self-contained applications.</p>
</li>
<li><p>A structured <strong>TOOLS.md</strong> describing 13 tools grouped by domain:</p>
<ul>
<li><p>Company data tools</p>
</li>
<li><p>Filing search tools</p>
</li>
<li><p>Filing details tools</p>
</li>
<li><p>Configuration and usage examples</p>
</li>
<li><p>Common financial concepts and SEC forms</p>
</li>
</ul>
</li>
</ul>
<p>This is exactly the kind of pattern we’re seeing emerge across AI ecosystems: instead of “prompting the model to Google things,” you grant the model a <em>well-documented, typed interface</em> to critical data systems, and let it reason on top.</p>
<p><a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar.Mcp</a> is that interface for EDGAR.</p>
<hr />
<h2 id="heading-design-choices-that-matter-to-ai-builders"><strong>Design Choices That Matter to AI Builders</strong></h2>
<p>Several technical choices in this repo are particularly relevant if you’re building AI-infused systems:</p>
<ol>
<li><p><strong>Type-Safe Financial Models</strong></p>
<p> By encoding EDGAR concepts in strongly-typed C# models, you get:</p>
<ul>
<li><p>Safer transformations into embeddings, feature vectors, or RAG documents.</p>
</li>
<li><p>Less room for silent shape mismatches when serializing/deserializing data for AI pipelines.</p>
</li>
<li><p>Clearer documentation and discoverable APIs directly from IDE tooling.</p>
</li>
</ul>
</li>
<li><p><strong>Async-First API Surface</strong></p>
<p> Fetching EDGAR data is inherently I/O-bound and often pagination-heavy. Full async support throughout means:</p>
<ul>
<li><p>You can build high-throughput ingestion services.</p>
</li>
<li><p>Agents can concurrently fetch multiple companies or filings without blocking threads.</p>
</li>
<li><p>You’re well-positioned to scale in cloud-native environments.</p>
</li>
</ul>
</li>
<li><p><strong>Dependency Injection as a First-Class Citizen</strong></p>
<p> Support for <code>Microsoft.Extensions.DependencyInjection</code> makes the EDGAR client feel like any other infrastructure dependency:</p>
<ul>
<li><p>Register the client and services once.</p>
</li>
<li><p>Inject into orchestrators, background workers, or tool handlers.</p>
</li>
<li><p>Swap or wrap services for testing or extended logic.</p>
</li>
</ul>
<p>This is especially key when bridging between a host (e.g., an MCP server or orchestration engine) and the low-level EDGAR API.</p>
</li>
</ol>
<ol start="4">
<li><p><strong>Configuration via Options</strong></p>
<p> <code>SecEdgarOptions</code> centralizes configuration—user agent, rate limiting behavior, base URLs, etc.—so that:</p>
<ul>
<li><p>You can tune behavior per environment.</p>
</li>
<li><p>You can plug in secrets/configuration providers for regulated deployments.</p>
</li>
<li><p>You can adapt when the SEC inevitably adjusts its API boundaries.</p>
</li>
</ul>
</li>
</ol>
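<p>Put together, registration and configuration might look like the sketch below. The option property names and the registration helper are illustrative assumptions; check the repository’s README for the exact API.</p>
<pre><code class="lang-csharp">// Sketch: binding SecEdgarOptions from configuration and registering the
// client in DI. Property names and AddEdgarClient() are illustrative
// assumptions, not the confirmed API.
builder.Services.Configure&lt;SecEdgarOptions&gt;(
    builder.Configuration.GetSection("SecEdgar"));

builder.Services.AddEdgarClient(); // hypothetical registration helper

// appsettings.json (shape is illustrative):
// {
//   "SecEdgar": {
//     "UserAgent": "MyCompany research@example.com",
//     "MaxRequestsPerSecond": 10
//   }
// }
</code></pre>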
<hr />
<h2 id="heading-a-path-to-ai-native-financial-research"><strong>A Path to AI-Native Financial Research</strong></h2>
<p>Taken together, <a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar</a> and <a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar.Mcp</a> offer a compelling blueprint for turning a legacy data source into something AI-native:</p>
<ol>
<li><p><strong>Normalize the data source</strong> with a robust, typed client library.</p>
</li>
<li><p><strong>Provide a guided sample</strong> that demonstrates real-world usage patterns and edge cases.</p>
</li>
<li><p><strong>Expose an AI-friendly protocol layer</strong> (MCP) that lets assistants call into that library safely and reliably.</p>
</li>
<li><p><strong>Ship cross-platform binaries</strong> so teams can run the AI data services anywhere—locally, in containers, or on hosted environments.</p>
</li>
</ol>
<p>This is a pattern that can be replicated for:</p>
<ul>
<li><p>Other regulators (e.g., ESMA, FCA, local securities commissions)</p>
</li>
<li><p>Alternative data sources (shipping, satellite, ESG, credit)</p>
</li>
<li><p>Internal enterprise systems (ERP, CRM, risk engines)</p>
</li>
</ul>
<p><a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar</a> just happens to tackle one of the most foundational public datasets in finance.</p>
<hr />
<h2 id="heading-why-this-matters-now"><strong>Why This Matters Now</strong></h2>
<p>As LLM-based copilots move from “answering questions” to <strong>powering workflows</strong>, they need deep, structured, and reliable access to domain data. For financial workflows, EDGAR is table stakes.</p>
<p>Projects like <a target="_blank" href="https://github.com/kdcllc/Moedim.Edgar">Moedim.Edgar</a> show how you can:</p>
<ul>
<li><p>Respect the underlying API and its constraints.</p>
</li>
<li><p>Wrap it in a developer-friendly, cloud-native .NET library.</p>
</li>
<li><p>Then layer on an AI-native protocol (MCP) that lets assistants plug directly into the data without brittle scraping or ad-hoc glue code.</p>
</li>
</ul>
<p>If you’re building AI systems that touch public companies, this repo is less a “nice utility” and more a <strong>reference architecture</strong> for how to turn a legacy data source into a first-class AI tool.</p>
<p>And if nothing else, it’s a reminder that sometimes the most impactful AI work isn’t in training bigger models—it’s in making the <em>right data</em> reliably available to the models we already have.</p>
]]></content:encoded></item><item><title><![CDATA[Migrate from AutoMapper to Moedim.Mapper: Faster, Safer, Easier to Debug]]></title><description><![CDATA[If your .NET project relies on AutoMapper and you're starting to feel the pain of implicit mapping, runtime surprises, or costly allocations, it’s time to consider migrating to Moedim.Mapper. Moedim.Mapper gives you explicit, maintainable mappings wi...]]></description><link>https://tech.kingdavidconsulting.com/migrate-from-automapper-to-moedimmapper-faster-safer-easier-to-debug</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/migrate-from-automapper-to-moedimmapper-faster-safer-easier-to-debug</guid><category><![CDATA[AutoMapper, Moedim.Mapper, .NET, C#, object-mapping, DTOs, mapping, migration-tooling, migration, code-migration, source-generation, performance, debugging, maintainability, dependency-injection, DI, EF Core, ProjectTo, unit-testing, testing, NuGet, GitHub, open-source, code-refactor, mapping-best-practices]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Fri, 14 Nov 2025 02:05:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763085829547/bd716c6e-14b6-42a9-b12d-1342317a6c5d.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>If your .NET project relies on AutoMapper and you're starting to feel the pain of implicit mapping, runtime surprises, or costly allocations, it’s time to consider migrating to Moedim.Mapper. Moedim.Mapper gives you explicit, maintainable mappings with modern .NET patterns — and it ships with tooling to make migration from AutoMapper straightforward.</p>
<p>This post explains why teams choose Moedim.Mapper, how it differs from AutoMapper, and provides a step‑by‑step migration walkthrough (including examples and migration tooling tips) so you can move confidently.</p>
<p>Repository: <a target="_blank" href="https://github.com/kdcllc/Moedim.Mapper">https://github.com/kdcllc/Moedim.Mapper</a></p>
<p>TL;DR</p>
<ul>
<li><p>Moedim.Mapper focuses on explicit, clear mapping code that’s easy to reason about and debug.</p>
</li>
<li><p>It reduces runtime surprises by favoring compile‑time safety and more direct code generation patterns.</p>
</li>
<li><p>The project includes migration tooling that helps convert AutoMapper Profiles into Moedim mapping code scaffolds.</p>
</li>
<li><p>This post gives quick before/after examples and a migration checklist so you can plan the transition.</p>
</li>
</ul>
<p>Why migrate? Pain points with AutoMapper</p>
<ul>
<li><p>Implicit mapping: AutoMapper’s convention-based mapping is convenient but can hide logic. When mappings come from conventions or profiles spread across the codebase, diagnosing unexpected values becomes harder.</p>
</li>
<li><p>Runtime surprises: Missing mappings or configuration mistakes often result in runtime behavior that can be subtle to track down.</p>
</li>
<li><p>Debugging: Tracing mapping logic in generated expression trees is difficult for many developers.</p>
</li>
<li><p>Performance: Depending on usage patterns, reflection and expression trees can add overhead (especially for high-throughput scenarios).</p>
</li>
</ul>
<p>What Moedim.Mapper brings</p>
<ul>
<li><p>Explicit mapping definitions that are readable and easy to debug.</p>
</li>
<li><p>A smaller mental model — mappings are explicit code rather than implicit conventions.</p>
</li>
<li><p>Tooling to help convert existing AutoMapper Profiles into Moedim mapping code.</p>
</li>
<li><p>Better productivity for teams that prefer to keep mapping logic visible and testable alongside the rest of the code.</p>
</li>
</ul>
<p>Quick comparison (conceptual)</p>
<ul>
<li><p>AutoMapper: Convention + Profiles -&gt; runtime expression tree / reflection mapping</p>
</li>
<li><p>Moedim.Mapper: Explicit definitions (and/or source-generated code) -&gt; direct, readable mapping code</p>
</li>
</ul>
<p>Quickstart — examples</p>
<ol>
<li>Example AutoMapper Profile (before)</li>
</ol>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> AutoMapper;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CustomerProfile</span> : <span class="hljs-title">Profile</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">CustomerProfile</span>(<span class="hljs-params"></span>)</span>
    {
        CreateMap&lt;CustomerEntity, CustomerDto&gt;()
            .ForMember(d =&gt; d.FullName, opt =&gt; opt.MapFrom(s =&gt; s.FirstName + <span class="hljs-string">" "</span> + s.LastName))
            .ForMember(d =&gt; d.IsActive, opt =&gt; opt.MapFrom(s =&gt; s.Status == Status.Active));
    }
}
</code></pre>
<ol start="2">
<li>Equivalent mapping with Moedim.Mapper (after)</li>
</ol>
<ul>
<li>Moedim.Mapper favors explicit converters/mappers which you can place next to your DTOs or in a dedicated mappings folder:</li>
</ul>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CustomerMapper</span> : <span class="hljs-title">IMapper</span>&lt;<span class="hljs-title">CustomerEntity</span>, <span class="hljs-title">CustomerDto</span>&gt;
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> CustomerDto <span class="hljs-title">Map</span>(<span class="hljs-params">CustomerEntity s</span>)</span>
    {
        <span class="hljs-keyword">if</span> (s == <span class="hljs-literal">null</span>) <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> CustomerDto
        {
            Id = s.Id,
            FullName = <span class="hljs-string">$"<span class="hljs-subst">{s.FirstName}</span> <span class="hljs-subst">{s.LastName}</span>"</span>,
            Email = s.Email,
            IsActive = s.Status == Status.Active
        };
    }
}
</code></pre>
<ol start="3">
<li>Registering mappings with DI</li>
</ol>
<ul>
<li>With AutoMapper you typically add profiles:</li>
</ul>
<pre><code class="lang-csharp">services.AddAutoMapper(<span class="hljs-keyword">typeof</span>(CustomerProfile).Assembly);
</code></pre>
<ul>
<li>With Moedim.Mapper, registration is explicit and usually just registering mappers or allowing convention-based scanning:</li>
</ul>
<pre><code class="lang-csharp">services.AddScoped&lt;IMapper&lt;CustomerEntity, CustomerDto&gt;, CustomerMapper&gt;();
<span class="hljs-comment">// or if Moedim provides helper scanning:</span>
services.AddMoedimMappers(<span class="hljs-keyword">typeof</span>(CustomerMapper).Assembly);
</code></pre>
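<p>Consumers then take the mapper as an ordinary dependency. A minimal example, reusing the <code>CustomerMapper</code> from earlier:</p>
<pre><code class="lang-csharp">public class CustomerService
{
    private readonly IMapper&lt;CustomerEntity, CustomerDto&gt; _mapper;

    public CustomerService(IMapper&lt;CustomerEntity, CustomerDto&gt; mapper)
    {
        _mapper = mapper;
    }

    public CustomerDto GetCustomer(CustomerEntity entity)
    {
        // Mapping is an explicit, debuggable call with no hidden configuration.
        return _mapper.Map(entity);
    }
}
</code></pre>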
<p>Migration tool — make the move faster</p>
<p>One reason teams delay migration is the perceived cost of rewriting all mapping profiles. To help, Moedim.Mapper includes migration tooling in the repository that:</p>
<ul>
<li><p>Scans AutoMapper Profile classes (and/or scans compiled assemblies)</p>
</li>
<li><p>Generates scaffolded mapping classes that follow Moedim.Mapper patterns</p>
</li>
<li><p>Preserves custom MapFrom expressions as inline assignments when possible</p>
</li>
<li><p>Emits TODO comments when conversion is ambiguous so you can inspect and finalize migration</p>
</li>
</ul>
<p>Example (high level) CLI usage</p>
<ul>
<li>Scan your project and emit scaffolded mappers to a folder (the exact CLI and arguments are in the repository's tools folder; adjust to what your checkout provides):</li>
</ul>
<pre><code class="lang-plaintext"># hypothetical example — check the repo for exact CLI usage
dotnet tool run moedim-migrator --source ./MyProject --out ./GeneratedMappers
</code></pre>
<p>What the migrator does for the earlier example</p>
<ul>
<li><p>Converts CreateMap/ForMember MapFrom expressions into explicit assignments.</p>
</li>
<li><p>For complex expressions that use external methods or value resolvers, the migrator emits equivalent inline code or a TODO to implement a helper.</p>
</li>
</ul>
<p>After running the migrator you'll get a set of ready-to-review mapping classes you can include in your project and refine.</p>
<p>Step-by-step migration plan</p>
<ol>
<li><p>Add Moedim.Mapper to your solution</p>
<ul>
<li><p>Install from NuGet (example):</p>
<pre><code class="lang-plaintext">  dotnet add package Moedim.Mapper
</code></pre>
</li>
</ul>
</li>
<li><p>Run the migration tool against the assembly that contains your AutoMapper profiles</p>
<ul>
<li>Review generated files in GeneratedMappers (or whichever output folder you chose)</li>
</ul>
</li>
<li><p>Add the generated mappers to your project and compile</p>
<ul>
<li>Fix any TODOs the migrator emitted (these are places where manual attention is required)</li>
</ul>
</li>
<li><p>Register mappers in DI (either manually or using a provided scanning helper)</p>
</li>
<li><p>Run your unit tests to verify behavior</p>
</li>
<li><p>Remove AutoMapper package and profiles once everything is validated</p>
</li>
</ol>
<p>Practical tips and patterns</p>
<ul>
<li><p>Migrate incrementally: Start with a bounded area (e.g., a feature or microservice). Migrate only the mappings touched by that feature.</p>
</li>
<li><p>Keep tests: If you have unit or integration tests that validate mapping output, keep them in place — they will provide quick feedback that the migration preserved behavior.</p>
</li>
<li><p>Use the generated scaffolds as a starting point — hand-tune complex mappings to be idiomatic and efficient.</p>
</li>
<li><p>Embrace explicitness: Where AutoMapper used magic and conventions, Moedim.Mapper asks you to be intentional. This pays off in maintainability.</p>
</li>
</ul>
<p>Advanced scenarios</p>
<ul>
<li><p>Collection mappings: Most mappers need to handle IEnumerable -&gt; List etc. Moedim.Mapper provides helpers (or simple loops) for these conversions. The migration tooling will scaffold collection conversions when it detects them.</p>
</li>
<li><p>Projection and IQueryable: If you relied on AutoMapper’s ProjectTo for EF Core projections, check Moedim.Mapper documentation for pattern alternatives (explicit projections or expression-based projections) and migrate gradually.</p>
</li>
<li><p>Conditional mapping: Translate AutoMapper Condition/PreCondition to explicit guards in the mapping method.</p>
</li>
</ul>
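<p>The collection case above is often just a small extension method over the single-object mapper, for example:</p>
<pre><code class="lang-csharp">// A simple loop-based collection helper built on the single-object mapper.
public static class MapperCollectionExtensions
{
    public static List&lt;CustomerDto&gt; MapAll(
        this IMapper&lt;CustomerEntity, CustomerDto&gt; mapper,
        IEnumerable&lt;CustomerEntity&gt; source)
    {
        var result = new List&lt;CustomerDto&gt;();
        foreach (var item in source)
        {
            result.Add(mapper.Map(item));
        }
        return result;
    }
}
</code></pre>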
<p>Testing and verification</p>
<ul>
<li><p>Keep your mapping tests. For each migrated mapping, add a unit test that creates a source object and asserts the target object properties.</p>
</li>
<li><p>Add test coverage around corner cases (nulls, empty lists, special enums) — the explicit mapping code makes it easier to reason about and test those edge cases.</p>
</li>
</ul>
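<p>A mapping test for the <code>CustomerMapper</code> shown earlier can be as small as this (xUnit shown):</p>
<pre><code class="lang-csharp">public class CustomerMapperTests
{
    [Fact]
    public void Map_ComposesFullName_AndDerivesIsActive()
    {
        var mapper = new CustomerMapper();
        var entity = new CustomerEntity
        {
            Id = 1,
            FirstName = "Ada",
            LastName = "Lovelace",
            Email = "ada@example.com",
            Status = Status.Active
        };

        var dto = mapper.Map(entity);

        Assert.Equal("Ada Lovelace", dto.FullName);
        Assert.True(dto.IsActive);
    }

    [Fact]
    public void Map_ReturnsNull_ForNullSource()
    {
        Assert.Null(new CustomerMapper().Map(null));
    }
}
</code></pre>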
<p>Performance, debugging, and maintenance</p>
<ul>
<li><p>With explicit mapping code you can step through mapping logic in your debugger easily. This reduces time-to-diagnose when values are wrong.</p>
</li>
<li><p>The generated mapping code tends to be straightforward and optimized by the runtime; you eliminate hidden expression tree building or reflection at runtime in many cases.</p>
</li>
<li><p>Explicit mappings are easier to profile and inline small optimizations when they matter.</p>
</li>
</ul>
<p>FAQ</p>
<p>Q: Will I lose productivity without AutoMapper’s conventions? A: You will lose some of the automatic "magic", but you gain predictability and explicit control. The migration tooling helps narrow the work by scaffolding the bulk of classes; from there, refining is usually minimal.</p>
<p>Q: Can I keep AutoMapper and Moedim.Mapper side-by-side during migration? A: Yes. Migrate mappings incrementally, and keep both libraries until you’ve validated parity. The migration plan above is designed for incremental transitions.</p>
<p>Q: Does Moedim.Mapper support the same advanced features as AutoMapper (value resolvers, converters, projection)? A: Moedim.Mapper focuses on clear and explicit mapping primitives. For advanced scenarios you typically implement small helper methods or converters; these are explicit and directly testable. Check the repository for guides and examples.</p>
<p>Call to action</p>
<ul>
<li><p>Try the migration on a small feature first. Run the migrator, review the output, and keep tests green.</p>
</li>
<li><p>Browse the repo: <a target="_blank" href="https://github.com/kdcllc/Moedim.Mapper">https://github.com/kdcllc/Moedim.Mapper</a></p>
</li>
<li><p>If you find missing scenarios or want smoother migration for your codebase patterns, open an issue or contribute a PR — your real-world mappings improve the tooling for everyone.</p>
</li>
</ul>
<p>Closing</p>
<p>Migrating from AutoMapper to Moedim.Mapper is about trading some of AutoMapper’s convenience for clearer, debuggable, and maintainable mapping code. With the provided migration tooling and a small, sensible plan, you can reduce runtime surprises, simplify debugging, and make mapping logic part of your application's readable, testable codebase.</p>
<p>Happy mapping!</p>
]]></content:encoded></item><item><title><![CDATA[Simplifying Task Scheduling in .NET Core with CronScheduler.AspNetCore and Generative AI]]></title><description><![CDATA[In the world of software development, efficient task scheduling is crucial for building scalable and maintainable applications. If you’re working with .NET Core, you might have encountered the complexities of existing scheduling libraries like Quartz...]]></description><link>https://tech.kingdavidconsulting.com/simplifying-task-scheduling-in-net-core-with-cronscheduleraspnetcore-and-generative-ai</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/simplifying-task-scheduling-in-net-core-with-cronscheduleraspnetcore-and-generative-ai</guid><category><![CDATA[cronjob]]></category><category><![CDATA[dotnetcore]]></category><category><![CDATA[genai]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Wed, 04 Sep 2024 23:58:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725494118414/266243e0-e815-4f27-a867-f7f72f4b040d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of software development, efficient task scheduling is crucial for building scalable and maintainable applications. If you’re working with .NET Core, you might have encountered the complexities of existing scheduling libraries like Quartz. Enter <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore"><strong>CronScheduler.AspNetCore</strong></a> <strong>(</strong><a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore">https://github.com/kdcllc/CronScheduler.AspNetCore</a>), a lightweight and easy-to-use library designed to simplify your scheduling needs.</p>
<h4 id="heading-what-is-cronscheduleraspnetcore">What is CronScheduler.AspNetCore?</h4>
<p>CronScheduler.AspNetCore is a library specifically designed for .NET Core applications, whether you’re using IHost or IWebHost. It adheres to the KISS (Keep It Simple, Stupid) principle, making it a straightforward alternative to more complex schedulers.</p>
<h4 id="heading-key-features">Key Features</h4>
<ol>
<li><p><strong>Lightweight and Easy-to-Use</strong>: Unlike other scheduling libraries, CronScheduler.AspNetCore is designed to be lightweight, ensuring that it doesn’t add unnecessary overhead to your application.</p>
</li>
<li><p><strong>Cron Syntax</strong>: The library uses cron syntax for scheduling tasks, making it familiar and easy to use for those who have worked with cron jobs before.</p>
</li>
<li><p><strong>Async Initialization</strong>: With the IStartupJob feature, you can initialize critical processes asynchronously before the host starts, ensuring your application is ready to go even in complex environments like Kubernetes.</p>
</li>
<li><p><strong>Flexible Hosting</strong>: Whether you’re hosting your application in AspNetCore or using a generic IHost, CronScheduler.AspNetCore has you covered.</p>
</li>
</ol>
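<p>As a rough sketch, a job implements the library’s scheduled-job interface and is registered with a cron expression. The exact interface members and registration helpers vary by library version, so treat the following as illustrative and verify against the README:</p>
<pre><code class="lang-csharp">// Illustrative sketch of the CronScheduler job pattern; verify the exact
// interface and registration APIs against the repository's README.
public class CleanupJob : IScheduledJob
{
    public string Name =&gt; nameof(CleanupJob);

    public Task ExecuteAsync(CancellationToken cancellationToken)
    {
        // The body runs on the cron schedule configured at registration time.
        Console.WriteLine($"Cleanup ran at {DateTimeOffset.UtcNow}");
        return Task.CompletedTask;
    }
}
</code></pre>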
<h4 id="heading-integrating-generative-ai">Integrating Generative AI</h4>
<p><a target="_blank" href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence">Generative AI</a>, a branch of artificial intelligence capable of creating new content such as text, images, and code, can further enhance the capabilities of CronScheduler.AspNetCore. By leveraging generative AI, you can automate the creation of complex scheduling configurations, generate dynamic task parameters, and even predict optimal scheduling times based on historical data.</p>
<h5 id="heading-example-use-case-dynamic-task-generation">Example Use Case: Dynamic Task Generation</h5>
<p>Imagine you need to schedule tasks that vary based on user behavior or external data. Generative AI can analyze patterns and generate appropriate cron expressions dynamically. Here’s a conceptual example:</p>
<p><strong>C#</strong></p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">DynamicCronJob</span> : <span class="hljs-title">ICronJob</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IGenerativeAIService _aiService;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">DynamicCronJob</span>(<span class="hljs-params">IGenerativeAIService aiService</span>)</span>
    {
        _aiService = aiService;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ExecuteAsync</span>(<span class="hljs-params">CancellationToken cancellationToken</span>)</span>
    {
        <span class="hljs-keyword">var</span> cronExpression = <span class="hljs-keyword">await</span> _aiService.GenerateCronExpressionAsync();
        <span class="hljs-comment">// Schedule the task based on the generated cron expression</span>
    }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Startup</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">ConfigureServices</span>(<span class="hljs-params">IServiceCollection services</span>)</span>
    {
        services.AddSingleton&lt;IGenerativeAIService, GenerativeAIService&gt;();
        services.AddCronScheduler(config =&gt;
        {
            config.AddJob&lt;DynamicCronJob&gt;(<span class="hljs-string">"*/5 * * * * *"</span>); <span class="hljs-comment">// Initial placeholder</span>
        });
    }
}
</code></pre>
<p>In this example, <code>DynamicCronJob</code> uses a generative AI service to create cron expressions dynamically, allowing for more flexible and adaptive scheduling.</p>
<h4 id="heading-getting-started">Getting Started</h4>
<p>To get started with CronScheduler.AspNetCore, you need to install the appropriate package. For AspNetCore hosting, use the following command:</p>
<pre><code class="lang-bash">dotnet add package CronScheduler.AspNetCore
</code></pre>
<p>For IHost hosting, use:</p>
<pre><code class="lang-bash">dotnet add package CronScheduler.Extensions
</code></pre>
<h4 id="heading-conclusion">Conclusion</h4>
<p><a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore">CronScheduler.AspNetCore</a> is a powerful yet simple tool for managing scheduled tasks in .NET Core applications. Its lightweight nature and ease of use make it an excellent choice for developers looking to streamline their task scheduling processes. By integrating generative AI, you can take your scheduling capabilities to the next level, making your applications more adaptive and intelligent.</p>
<p>Give it a try and simplify your scheduling today!</p>
]]></content:encoded></item><item><title><![CDATA[Streamline Your Dev Workflow with Azure AI CLI and AI Agents: Boost Productivity and Cut Costs]]></title><description><![CDATA[Why You Should Use Azure AI CLI: Streamlining Your Dev Workflow
As a developer, you're always looking for ways to optimize your workflow, reduce costs, and increase productivity. That's where the Azure AI CLI comes in – a powerful tool that enables y...]]></description><link>https://tech.kingdavidconsulting.com/streamline-your-dev-workflow-with-azure-ai-cli-and-ai-agents-boost-productivity-and-cut-costs</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/streamline-your-dev-workflow-with-azure-ai-cli-and-ai-agents-boost-productivity-and-cut-costs</guid><category><![CDATA[ai agents]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure ai services]]></category><category><![CDATA[semantic kernel]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[open ai]]></category><category><![CDATA[Azure OpenAI]]></category><category><![CDATA[Microsoft, Azure OpenAI]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Wed, 31 Jul 2024 16:55:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722444570515/4790e949-046f-4a64-b041-326b9d09ae8a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Why You Should Use Azure AI CLI: Streamlining Your Dev Workflow</strong></p>
<p>As a developer, you're always looking for ways to optimize your workflow, reduce costs, and increase productivity. That's where the Azure AI CLI comes in – a powerful tool that enables you to seamlessly integrate Azure AI services into your development process.</p>
<p>In this post, we'll explore the benefits of using the Azure AI CLI and walk you through a simple project setup for Semantic Kernel and Agents.</p>
<p><strong>What is Azure AI CLI?</strong></p>
<p>The Azure AI CLI is a cross-platform command-line tool that allows you to connect and use Azure AI services without writing code. With just a few commands, you can access a wide range of AI capabilities, including natural language processing (NLP), computer vision, and more.</p>
<p>REPO: <a target="_blank" href="https://github.com/Azure/azure-ai-cli">https://github.com/Azure/azure-ai-cli</a></p>
<p><strong>Advantages of Using Azure AI CLI</strong></p>
<ol>
<li><p><strong>Streamlined Workflow</strong>: The Azure AI CLI simplifies your development workflow by providing a single interface to all Azure AI services. No more switching between different tools or services – just a few commands to get you started.</p>
</li>
<li><p><strong>Increased Productivity</strong>: With the Azure AI CLI, you can automate repetitive tasks and focus on higher-level creative work. This means you'll be able to deliver projects faster and with greater accuracy.</p>
</li>
<li><p><strong>Cost-Effective</strong>: The Azure AI CLI allows you to pay only for what you use, reducing costs and minimizing waste. Plus, you can scale up or down as needed, without worrying about upfront costs.</p>
</li>
<li><p><strong>Access to Advanced AI Capabilities</strong>: The Azure AI CLI gives you access to a wide range of advanced AI capabilities, including NLP, computer vision, and more. This means you can create more sophisticated projects and applications that truly impress.</p>
</li>
</ol>
<p><strong>Getting Started with the Azure AI CLI</strong></p>
<p>Ready to start using the Azure AI CLI? Here's a simple project setup for Semantic Kernel and Agents:</p>
<hr />
<pre><code class="lang-bash">    <span class="hljs-comment"># 1. install dotnet core</span>
    <span class="hljs-comment"># https://dotnet.microsoft.com/en-us/download</span>

    <span class="hljs-comment"># 2. create manifest file for the project</span>
    dotnet new tool-manifest

    <span class="hljs-comment"># 3. install azure ai cli</span>
    dotnet tool install Azure.AI.CLI --prerelease

    <span class="hljs-comment"># 4. init the tooling, this step will cache credentials locally</span>
    dotnet ai init

    <span class="hljs-comment"># 5. list all templates available</span>
    dotnet ai dev new list

    <span class="hljs-comment"># 6. create this template </span>
    dotnet ai dev new sk-chat-with-agents

    <span class="hljs-comment"># 7. run this template with all of the env vars loaded, thanks to azure ai cli</span>
    dotnet ai dev shell --run <span class="hljs-string">"dotnet run"</span>
</code></pre>
<p>By following these steps, you'll be able to set up a simple project for Semantic Kernel and Agents using the Azure AI CLI.</p>
<p>The following shows the SK agents at work:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722444547262/6243a7aa-d57f-4d4a-8b3d-41d82de0c9a5.png" alt="SK agents chat session output" class="image--center mx-auto" /></p>
<p><strong>Conclusion</strong></p>
<p>The Azure AI CLI is a powerful tool that can help streamline your development workflow, increase productivity, and reduce costs. With its advanced AI capabilities and simplified interface, it's an essential tool for any developer looking to take their projects to the next level.</p>
<p>In this post, we've explored the benefits of using the Azure AI CLI and walked you through a simple project setup for Semantic Kernel and Agents. Whether you're just starting out or looking to expand your skills, the Azure AI CLI is an excellent choice for anyone interested in developing with AI.</p>
<p>Completed code can be found at: <a target="_blank" href="https://github.com/kdcllc/generative-ai/blob/master/dotnet/agents/sk-chat-with-agents/README.md">https://github.com/kdcllc/generative-ai</a></p>
]]></content:encoded></item><item><title><![CDATA[Scheduling Generative AI Jobs Inside Azure Container Apps]]></title><description><![CDATA[As you build a generative AI application, you might need to schedule jobs to perform tasks such as data processing, model training, or output generation. In this blog post, we'll explore how to use the CronScheduler.AspNetCore library to schedule job...]]></description><link>https://tech.kingdavidconsulting.com/scheduling-generative-ai-jobs-inside-azure-container-apps</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/scheduling-generative-ai-jobs-inside-azure-container-apps</guid><category><![CDATA[generative ai]]></category><category><![CDATA[Generative AI, OpenAI, Azure OpenAI, LLaMA-2, PaLM API, Vertex AI, DALL-E, ChatGPT, Whisper]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[asp.net core]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Sat, 27 Jul 2024 01:56:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722045343227/1297ec8e-0746-4091-ae07-b71c9cc21ce4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As you build a generative AI application, you might need to schedule jobs to perform tasks such as data processing, model training, or output generation. In this blog post, we'll explore how to use the <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore">CronScheduler.AspNetCore</a> library to schedule jobs inside an Azure Container App.</p>
<p><strong>What is</strong> <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore/tree/master/src/CronSchedulerApp">CronSchedulerApp</a><strong>?</strong></p>
<p><a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore/tree/master/src/CronSchedulerApp">CronSchedulerApp</a> is a .NET Core web application that demonstrates various scheduled job scenarios using the <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore">CronScheduler.AspNetCore</a> library. It includes examples of background tasks, startup jobs, and scheduled jobs that can be used in your own applications.</p>
<p><strong>Benefits of Using Scheduled Jobs</strong></p>
<p>Scheduled jobs offer several benefits for your generative AI application:</p>
<ul>
<li><p><strong>Efficient processing</strong>: By scheduling jobs to run at specific times or intervals, you can ensure that computationally intensive tasks are executed during off-peak hours or when system resources are available.</p>
</li>
<li><p><strong>Improved reliability</strong>: Scheduled jobs can be configured to retry failed tasks or execute alternative tasks if the primary job fails.</p>
</li>
<li><p><strong>Enhanced scalability</strong>: Azure Container Apps provide a scalable platform for your application. By scheduling jobs, you can take advantage of this scalability and ensure that your application can handle increased traffic or processing demands.</p>
</li>
</ul>
<p><strong>Getting Started with</strong> <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore/tree/master/src/CronSchedulerApp">CronSchedulerApp</a></p>
<p>To get started with <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore/tree/master/src/CronSchedulerApp">CronSchedulerApp</a>, follow these steps:</p>
<ol>
<li><p><strong>Install the NuGet package</strong>: Install the <code>CronScheduler.AspNetCore</code> NuGet package in your .NET Core project.</p>
</li>
<li><p><strong>Configure the scheduler</strong>: In your <code>Program.cs</code>, configure the scheduler using the <code>AddScheduler</code> method. This method allows you to add jobs and customize job options, such as the run interval, maximum retries, and error handling.</p>
</li>
</ol>
<p><strong>Example Job: TorahQuoteJob</strong></p>
<p>Let's take a look at an example job, <code>TorahQuoteJob</code>, which retrieves a random verse from the Torah and updates the current verses in the <code>TorahVerses</code> service:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">TorahQuoteJob</span> : <span class="hljs-title">IScheduledJob</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ILogger&lt;TorahQuoteJob&gt; _logger;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ITorahVersesService _torahVersesService;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">TorahQuoteJob</span>(<span class="hljs-params">ILogger&lt;TorahQuoteJob&gt; logger, ITorahVersesService torahVersesService</span>)</span>
    {
        _logger = logger;
        _torahVersesService = torahVersesService;
    }

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Name { <span class="hljs-keyword">get</span>; } = <span class="hljs-keyword">nameof</span>(TorahQuoteJob);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ExecuteAsync</span>(<span class="hljs-params">CancellationToken cancellationToken</span>)</span>
    {
        _logger.LogInformation(<span class="hljs-string">"Executing scheduled job: {jobName}"</span>, Name);
        <span class="hljs-comment">// Retrieve a random verse from the Torah</span>
        <span class="hljs-keyword">var</span> verse = <span class="hljs-keyword">await</span> _torahVersesService.GetRandomVerse();
        <span class="hljs-comment">// Update the current verses in the `TorahVerses` service</span>
        <span class="hljs-keyword">await</span> _torahVersesService.UpdateCurrentVerses(verse);
    }
}
</code></pre>
<p><strong>Configuring Job Options</strong></p>
<p>You can configure job options using a custom class that inherits from <code>SchedulerOptions</code>. For example:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">TorahQuoteJobOptions</span> : <span class="hljs-title">SchedulerOptions</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> SomeOption { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
}
</code></pre>
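<p>Options classes derived from <code>SchedulerOptions</code> are typically bound from configuration rather than hard-coded. As a sketch only — the <code>SchedulerJobs</code> section name is an assumption, <code>CronSchedule</code> and <code>RunImmediately</code> follow the library's conventions, and <code>SomeOption</code> is the illustrative property from the class above — an <code>appsettings.json</code> entry might look like this:</p>

```json
{
  "SchedulerJobs": {
    "TorahQuoteJob": {
      "CronSchedule": "0 */6 * * *",
      "RunImmediately": true,
      "SomeOption": "example-value"
    }
  }
}
```

<p>Here <code>"0 */6 * * *"</code> is a standard cron expression meaning "at minute 0 of every sixth hour".</p>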
<p><strong>Registering the Job</strong></p>
<p>To register the <code>TorahQuoteJob</code> with the scheduler, use the following code:</p>
<pre><code class="lang-csharp">builder.Services.AddScheduler(builder =&gt;
{
    builder.Services.AddSingleton&lt;TorahVerses&gt;();
    builder.Services
        .AddHttpClient&lt;TorahService&gt;()
        .AddTransientHttpErrorPolicy(p =&gt; p.RetryAsync());

    builder.AddJob&lt;TorahQuoteJob, TorahQuoteJobOptions&gt;();
    builder.Services.AddScoped&lt;UserService&gt;();
    builder.AddJob&lt;UserJob, UserJobOptions&gt;();

    builder.AddUnobservedTaskExceptionHandler(sp =&gt;
    {
        <span class="hljs-keyword">var</span> logger = sp.GetRequiredService&lt;ILoggerFactory&gt;().CreateLogger(<span class="hljs-string">"CronJobs"</span>);
        <span class="hljs-keyword">return</span> (sender, args) =&gt;
        {
            logger?.LogError(args.Exception?.Message);
            args.SetObserved();
        };
    });
});
</code></pre>
<p><strong>Running the Application</strong></p>
<p>To run your application with scheduled jobs, follow these steps:</p>
<ol>
<li><p><strong>Build and deploy</strong>: Build and deploy your .NET Core project to Azure Container Apps.</p>
</li>
<li><p><strong>Configure the scheduler</strong>: Configure the scheduler using the <code>AddScheduler</code> method in your <code>Program.cs</code>.</p>
</li>
<li><p><strong>Run the application</strong>: Run your application to execute scheduled jobs based on their configured schedules.</p>
</li>
</ol>
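<p>Step 1 assumes the project is containerized. A minimal multi-stage <code>Dockerfile</code> sketch for a .NET 8 app — the project name <code>CronSchedulerApp</code> matches the sample, but the paths are illustrative and should be adjusted to your own layout:</p>

```dockerfile
# build stage: restore and publish the app (paths are illustrative)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish src/CronSchedulerApp/CronSchedulerApp.csproj -c Release -o /app/publish

# runtime stage: the smaller aspnet base image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "CronSchedulerApp.dll"]
```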
<p>In this blog post, we've explored how to use the <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore">CronScheduler.AspNetCore</a> library with <a target="_blank" href="https://github.com/kdcllc/CronScheduler.AspNetCore/tree/master/src/CronSchedulerApp">CronSchedulerApp</a> to schedule generative AI jobs inside Azure Container Apps. By leveraging scheduled jobs, you can improve the efficiency, reliability, and scalability of your application.</p>
<p>Happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[Deploying RavenDB Accelerator to Azure Container Apps using azd up]]></title><description><![CDATA[In our previous blog post, we explored why RavenDB is the best option for new generative AI projects. As we discussed, RavenDB provides a robust database solution that can handle complex data structures and vast amounts of information required for th...]]></description><link>https://tech.kingdavidconsulting.com/deploying-ravendb-accelerator-to-azure-container-apps-using-azd-up</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/deploying-ravendb-accelerator-to-azure-container-apps-using-azd-up</guid><category><![CDATA[RavenDB]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[azure-container-apps]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Thu, 25 Jul 2024 20:57:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721940627481/f1ea0b46-b181-44a9-ba96-9e6289278443.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In our previous blog post, we explored why RavenDB is the best option for new generative AI projects. As we discussed, RavenDB provides a robust database solution that can handle complex data structures and vast amounts of information required for these innovative applications.</p>
<p>Today, we'll take it to the next level by deploying the RavenDB Accelerator to Azure Container Apps using <code>azd up</code>. This accelerator is designed specifically for Proof of Concept (POC) projects, making it easy to set up a RavenDB-backed ASP.NET Core application and deploy it to Azure.</p>
<p><strong>Prerequisites</strong></p>
<p>Before we dive into the deployment process, make sure you have the following prerequisites:</p>
<ul>
<li><p><a target="_blank" href="https://dotnet.microsoft.com/download/dotnet/8.0">.NET SDK 8.0</a></p>
</li>
<li><p><a target="_blank" href="https://www.docker.com/get-started">Docker</a></p>
</li>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI</a></p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/install-azd">Azure Developer CLI (azd)</a></p>
</li>
</ul>
<p><strong>Deploying using azd up</strong></p>
<p>To deploy the RavenDB Accelerator to Azure Container Apps, follow these steps:</p>
<ol>
<li><p><strong>Install Azure Developer CLI:</strong></p>
<p> Download and install the <code>azd</code> CLI (Azure Developer CLI) from the official documentation.</p>
</li>
<li><p><strong>Clone the repo:</strong></p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/kdcllc/ravendb-donet-accelerator.git
</code></pre>
</li>
<li><p><strong>Run azd up:</strong></p>
<p> Navigate to the root directory of your project and run the command:</p>
<pre><code class="lang-bash"> azd up
</code></pre>
</li>
<li><p><strong>Follow the steps:</strong></p>
<p> Follow the prompts to create Azure resources, such as a resource group, log analytics workspace, application insights, container registry, key vault, storage account, and container apps environment.</p>
</li>
</ol>
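<p><code>azd up</code> is driven by the <code>azure.yaml</code> manifest at the repository root. The accelerator ships its own, so this is a sketch of the general shape only — the service name and project path below are illustrative, not taken from the repo:</p>

```yaml
# azure.yaml - tells azd what to build and where to host it
name: ravendb-accelerator
services:
  web:
    project: ./src/Web       # path to the ASP.NET Core project (illustrative)
    language: dotnet
    host: containerapp       # deploy the service as an Azure Container App
```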
<h3 id="heading-post-deployment-steps-to-secure-ravendb">Post-deployment steps to secure RavenDB</h3>
<p>After deploying the accelerator, navigate to the Ingress section of the <code>ravendb</code> Azure Container App in the Azure Container Apps environment. You'll need to:</p>
<ol>
<li><p>Select -&gt; 'Allow traffic from IPs configured below, deny all other traffic'</p>
</li>
<li><p>Click -&gt; 'Add the app's outbound IP address'</p>
</li>
<li><p>Add any other IP addresses that should have access to RavenDB.</p>
</li>
</ol>
<p><a target="_blank" href="https://raw.githubusercontent.com/kdcllc/ravendb-donet-accelerator/master/images/ip-restrictions-mode.png"><img src="https://raw.githubusercontent.com/kdcllc/ravendb-donet-accelerator/master/images/ip-restrictions-mode.png" alt="ip restriction mode" /></a></p>
<p><strong>Conclusion</strong></p>
<p>In this post, we demonstrated how to deploy the RavenDB Accelerator to Azure Container Apps using <code>azd up</code>. This accelerator is designed for Proof of Concept (POC) projects and provides a complete solution for building and deploying ASP.NET Core applications with RavenDB. By following these steps, you can quickly set up a robust database solution for your generative AI project.</p>
<p>Whether you're working on a chatbot that can generate human-like responses or an image generator that can create stunning artwork, RavenDB provides the foundation you need to bring your vision to life. So why wait? Get started with RavenDB today and unlock the full potential of your generative AI project!</p>
]]></content:encoded></item><item><title><![CDATA[Unleashing the Power of Generative AI: Why RavenDB is Your Ultimate Database Solution]]></title><description><![CDATA[The world of artificial intelligence has been abuzz with the recent advancements in generative AI, particularly with the emergence of chatbots like ChatGPT. As these technologies continue to evolve and push the boundaries of what's possible, it's ess...]]></description><link>https://tech.kingdavidconsulting.com/unleashing-the-power-of-generative-ai-why-ravendb-is-your-ultimate-database-solution</link><guid isPermaLink="true">https://tech.kingdavidconsulting.com/unleashing-the-power-of-generative-ai-why-ravendb-is-your-ultimate-database-solution</guid><category><![CDATA[RavenDB]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[King David Consulting LLC]]></dc:creator><pubDate>Thu, 25 Jul 2024 20:42:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721938870912/9c0cb8fc-ad95-4850-a1f5-99e75b85b425.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The world of artificial intelligence has been abuzz with the recent advancements in generative AI, particularly with the emergence of chatbots like ChatGPT. As these technologies continue to evolve and push the boundaries of what's possible, it's essential to have a robust database that can handle the complex data structures and vast amounts of information required for their development.</p>
<p>In this blog post, we'll explore why RavenDB is the best option for new generative AI projects. We'll delve into the unique features of RavenDB that make it an ideal choice for building and deploying these innovative applications.</p>
<p><strong>Why a NoSQL Database?</strong></p>
<p>Before diving into the specifics of RavenDB, let's first consider why a NoSQL database is essential for generative AI projects. Unlike traditional relational databases, NoSQL databases are designed to handle large amounts of unstructured or semi-structured data. This flexibility is crucial for generative AI applications, which often rely on vast amounts of text, images, and other forms of unstructured data.</p>
<p><strong>RavenDB: A Perfect Fit for Generative AI</strong></p>
<p>So, why RavenDB specifically? Here are a few compelling reasons:</p>
<ol>
<li><p><strong>JSON Document-Oriented Database</strong>: RavenDB is built around JSON documents, making it an ideal fit for handling the complex data structures required by generative AI applications.</p>
</li>
<li><p><strong>Schema-Less</strong>: Unlike traditional databases, RavenDB doesn't require a predefined schema. This allows developers to add new fields or modify existing ones without having to worry about complex schema updates.</p>
</li>
<li><p><strong>High Performance</strong>: RavenDB is designed for high performance and scalability, making it an excellent choice for large-scale generative AI projects that require fast data processing and retrieval.</p>
</li>
<li><p><strong>ACID Compliance</strong>: RavenDB ensures ACID compliance, guaranteeing that database transactions are processed reliably and securely.</p>
</li>
</ol>
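<p>To make point 1 concrete: a generated chat exchange can live in a single self-describing JSON document — no join tables, and new fields (say, a model version) can be added later without a schema migration. The document shape below is purely illustrative:</p>

```json
{
  "id": "chatSessions/1-A",
  "userPrompt": "Summarize this article",
  "generatedResponse": "The article argues that...",
  "model": "gpt-4o",
  "tokensUsed": 412,
  "createdAt": "2024-07-25T20:42:06Z",
  "tags": ["summarization", "english"]
}
```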
<p><strong>Why RavenDB Beats the Competition</strong></p>
<p>While there are other NoSQL databases available, RavenDB stands out from the competition in several key areas:</p>
<ol>
<li><p><strong>Ease of Use</strong>: RavenDB has a relatively low barrier to entry, making it easier for developers to get started with their generative AI projects.</p>
</li>
<li><p><strong>Flexibility</strong>: RavenDB's flexible data model allows developers to adapt their database schema as their project evolves.</p>
</li>
<li><p><strong>Scalability</strong>: RavenDB is designed for scalability, ensuring that your generative AI application can handle increasing amounts of data and user traffic.</p>
</li>
</ol>
<p><strong>Conclusion</strong></p>
<p>As the field of generative AI continues to expand, having a reliable and scalable database like RavenDB is crucial for building and deploying innovative applications. With its JSON document-oriented design, schema-less approach, high-performance capabilities, and ACID compliance, RavenDB is the perfect choice for new generative AI projects.</p>
<p>Whether you're working on a chatbot that can generate human-like responses or an image generator that can create stunning artwork, RavenDB provides the foundation you need to bring your vision to life. So why wait? Get started with RavenDB today and unlock the full potential of your generative AI project!</p>
]]></content:encoded></item></channel></rss>