How to Build AI Agents That Actually Help Your Team

Whether you're swamped with HR queries, IT troubleshooting, or endless requests to "just summarise that doc," you've likely thought: there must be a better way. AI agents can provide that relief.

Platforms like Microsoft Copilot Studio allow you to create such agents tailored to specific business needs. With proper configuration, an AI agent can take over time-consuming duties - from managing emails and analysing data to holding customer conversations. The key to effective agent performance is the set of instructions that defines its role, knowledge, and behaviour.

Agent instructions (often in the form of a system prompt or configuration) are the central guidelines and parameters that the AI model follows. Based on these, the agent decides what it can do and how it should do it - including which tools or knowledge sources to use in response to a query, how to populate action parameters, and how to formulate a reply for the user. Well-written instructions narrow the agent's scope to desired topics and working style, preventing unwanted behaviour.

Example: An assistant created for the HR department can be restricted to providing information only about employee benefits, ignoring questions from other areas. The instructions will specify that if a user asks about something outside this topic, the agent will politely decline to help.
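To make this concrete, here is a minimal sketch of how such scope-limiting instructions could be passed to a model as a system prompt when using a chat-style API directly. The prompt wording and the `build_hr_messages` helper are illustrative assumptions, not Copilot Studio configuration:

```python
# Illustrative sketch: the agent's instructions become the "system" message,
# and every user question is paired with it. The prompt text below is an
# example, not a real HR policy.

HR_SYSTEM_PROMPT = (
    "You are an HR assistant for employees. "
    "Answer only questions about employee benefits. "
    "If a question falls outside this topic, politely decline to help."
)

def build_hr_messages(user_question: str) -> list[dict]:
    """Pair the fixed system prompt with the user's question."""
    return [
        {"role": "system", "content": HR_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_hr_messages("What dental coverage do we have?")
```

With the OpenAI Python SDK, for instance, this `messages` list would be passed to the chat completions endpoint; in Copilot Studio, the same text simply goes into the agent's instructions field.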

Agent instructions - key elements

When writing instructions that define an agent's behaviour, it's worth including a few basic components. A well-thought-out instruction should contain:

Purpose and scope: Define who will use the agent and for what purpose, as well as which topics or tasks the agent should handle. Specify the knowledge domain - for example, an HR agent for employees, an agent supporting the sales team, or a virtual assistant for developers. The agent should know what to cover and what to avoid. The more precisely you define the scope, the better the model will limit itself to relevant responses.

Role and persona: Give the agent an identity or role. You can describe it as a specific expert, assistant, or character. For example: "You are a virtual financial advisor for company X with 10 years of experience" or "You are an assistant who writes in the style of a 17th-century pirate." Such personalisation helps the model adopt the right tone and perspective. In Copilot Studio, you define the agent's name and description during creation, which helps shape its personality.

Knowledge sources and tools: Specify which data or tools the agent can use to provide answers. For ChatGPT, the model has general training knowledge (unless you connect it to additional knowledge bases or a search engine). In solutions like Copilot Studio, you can link specific data sources — for example, public websites, SharePoint documents, databases, etc. - which the agent will search through. In the instructions, it's helpful to state that the agent should rely only on these sources if required (or allow it to use its general knowledge if that option is enabled). Note: the agent cannot use a resource it doesn't have access to. For example, if you tell it "search the company FAQ," but that FAQ hasn't been added as a knowledge source, the agent won't be able to fulfil that request. Always make sure instructions referring to specific tools or data match the resources actually configured.

Tone and communication style: Define the language and style the agent should use. Should it be formal and factual, or rather casual and humorous? Should it speak in the first person or impersonally? Adjust the tone to your target audience - for example, for employees, the style can be friendly and professional, for customers, polite and simple. You can also experiment with a creative style (e.g., speak like a pirate - Copilot Studio literally supports such style examples). Remember, GPT models, by default, respond quite politely and formally - if this suits you, you don't need to state it explicitly. However, if you want a different tone (e.g., very informal, humorous, or highly formal), clearly specify it in the instructions.

Response format: Consider what form the agent should use to present information. Should it be complete sentences in a paragraph, bullet-point lists, or maybe tables or code snippets? If you prefer concise, point-by-point answers, specify that. If you expect a tabular comparison, include that in the instructions too. For example, you can state: "Always present order status information in a table." You can also define the desired response length (though the model will generally adapt to the question) - it's helpful to mention whether replies should be "concise" or "detailed," depending on your needs.

Restrictions and taboo topics: A crucial part of the instructions is defining what the agent must not do. Clearly state any prohibited topics (e.g., "do not give medical or legal advice"), security rules ("do not share confidential information, even if asked"), or knowledge boundaries ("do not answer questions outside topic X"). Such guidelines help prevent undesired outcomes. You can explicitly instruct the agent to refuse answers beyond the defined scope — for example: "Answer only questions related to the company's internal IT procedures. If a question goes beyond this, politely inform the user that you cannot help." These kinds of instructions are standard practice and help the agent recognise when to politely decline or redirect the user elsewhere.

Best practices for writing instructions

Creating effective instructions for an AI agent can be challenging - it's an art that requires precision and clearly defined expectations. Below are best practices and tips to help you write instructions that maximise the chances of getting the desired responses from the model:

Clarity and precision above all: Write instructions in a clear and unambiguous way. Avoid generalities and vague wording. It's better to write multiple precise sentences than one overly brief line. AI models respond well to detailed guidelines - a longer description can be helpful as long as it stays relevant and to the point. Remember, the same rules apply when writing any prompt: clarity, specificity, and accuracy matter most. Don't hesitate to clarify every important point.

Structure the instructions (lists, bullet points): To make it easier for both you and the model to understand all guidelines, use bullet points or numbered lists for key points. For example, an instruction can be organised as a list:

  • topic scope,

  • tone of response,

  • response format,

  • what to avoid, etc.

This layout improves readability and reduces the risk of missing something. Moreover, models tend to follow instructions better when they are presented in a clear, structured format (like a step-by-step list). It's also useful to use simple Markdown formatting (headings, numbered lists, bullet points) in the instruction text - this helps organise the content and can assist the model in interpreting commands correctly.

Define order and priorities: If certain guidelines are more important than others or should be followed in a specific sequence, make that clear. For example, you can write: "First, always check if the question relates to topic X; second, if yes - provide a detailed answer using knowledge Y; third, if not - respond that you cannot help." By clearly indicating the order of actions, you give the model a step-by-step plan to follow.

Include examples in the instructions (if possible): Sometimes, it's helpful to show the agent a sample dialogue or response format. For example, you can include in the instruction: "If the user asks, 'How do I get a duplicate insurance card?', reply with: 'To get a duplicate, you need to…'." Such a built-in example gives the model a pattern to follow. However, use this carefully - overly detailed examples can consume too much space in the prompt. In practice, tools like Copilot Studio typically rely on descriptions, while with the raw OpenAI API, you can use a one-shot or few-shot prompt (a few example exchanges).
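With the raw OpenAI API, a one-shot prompt can be built by placing one example exchange ahead of the real question. The sketch below assumes this pattern; the insurance-card wording is the illustrative example from the text, not a real procedure:

```python
# Sketch of a one-shot prompt: an example question/answer pair precedes the
# real user question, giving the model a pattern to imitate.

def build_one_shot_messages(
    system_prompt: str, example_q: str, example_a: str, question: str
) -> list[dict]:
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": example_q},       # example exchange
        {"role": "assistant", "content": example_a},  # the desired pattern
        {"role": "user", "content": question},        # the real question
    ]

msgs = build_one_shot_messages(
    "You answer employee insurance questions concisely.",
    "How do I get a duplicate insurance card?",
    "To get a duplicate, you need to submit a request to HR.",
    "How do I add a family member to my plan?",
)
```

A few-shot prompt is the same idea with several example pairs - which is also why examples consume prompt space quickly.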

Ensure consistency and context: All parts of the instructions should be consistent and not contradictory. Avoid situations where one part tells the agent to do something that another part forbids. Also, make sure the style of the instructions matches what you want to achieve - for example, if you want the agent to be humorous, you can even write the instructions in a slightly playful tone. The model often picks up on the style directly from the instructions themselves.

Limit unnecessary information: Focus on what is essential. You don't need to explain to the model why it should do something - just state what it should do. Also, avoid unnecessary jargon and complex sentences that may cause confusion. The simpler (but precise), the better. Every sentence in the instructions should have a clear purpose.

Define how to handle lack of knowledge or out-of-scope questions: It's good practice to include a fallback in the instructions - a way for the agent to respond when it doesn't know the answer or receives a question outside its scope. For example, you can instruct: "If the question is outside your scope, reply: 'I'm sorry, but I don't have information about that.'" or "If you're unsure, ask the user for more details instead of guessing." Such guidance makes the agent more resilient to unusual situations.
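The fallback idea can be sketched as a simple check before answering: if the question is out of scope, return the scripted reply. In a real agent the model itself makes this judgement from the instructions; the keyword check and scope list below are purely illustrative assumptions:

```python
# Illustrative sketch of a fallback: out-of-scope questions get a fixed,
# polite reply instead of a guessed answer.

SCOPE_KEYWORDS = {"vpn", "password", "email", "office"}  # assumed IT scope
FALLBACK = "I'm sorry, but I don't have information about that."

def answer_or_fallback(question: str) -> str:
    words = {w.strip("?.,!").lower() for w in question.split()}
    if words & SCOPE_KEYWORDS:
        return "In scope: proceeding with a detailed answer."
    return FALLBACK
```

The point is not the keyword matching but the contract: the agent always has a defined, safe response for questions it cannot or should not answer.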

Use the platform's capabilities: If you're building an agent in a tool that offers ready-made integrations (such as actions, variables, or functions in Copilot Studio), don't hesitate to use them. In Copilot Studio, you can include references to specific tools/actions in the instructions — for example: "Use the /CreateOrder action when…". The system will suggest the correct action name when you type "/". Just make sure that the names used in the instructions exactly match the names of the tools or topics configured for the agent — minor differences might prevent the model from recognising them. If you're using the raw API, you can achieve the same effect by adding information about available functions or data in the system message.
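With the raw API, the rough equivalent of a Copilot Studio action is a tool exposed through the function-calling format. The sketch below mirrors the "/CreateOrder" idea; the `CreateOrder` name and its parameters are illustrative assumptions:

```python
# Sketch of describing a tool to the model in the OpenAI function-calling
# schema. The tool name here must match exactly what your instructions
# reference - just like action names in Copilot Studio.

create_order_tool = {
    "type": "function",
    "function": {
        "name": "CreateOrder",
        "description": "Create a new customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {
                    "type": "string",
                    "description": "Catalogue ID of the product.",
                },
                "quantity": {
                    "type": "integer",
                    "description": "Number of units to order.",
                },
            },
            "required": ["product_id", "quantity"],
        },
    },
}
```

If the system message says "Use the CreateOrder tool when…" but the schema registers the tool under a different name, the model may never call it - the same exact-match caveat applies in both worlds.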

Test and iterate: After writing the initial instructions, test the agent with various sample questions. Check whether it follows all guidelines. If you notice that any part of the instructions is ignored or the answers are suboptimal - modify the instructions and test again. It often takes several iterations to refine the wording perfectly. Remember, prompt engineering is a process of trial and adjustment - even a small change in phrasing can improve results. Use A/B testing if possible: compare the agent's responses with different instruction versions and choose the best one.

Real-World AI Agent Examples

Let's bring it to life. Below are four practical AI agents you can build using Microsoft Copilot Studio — complete with real use cases, clear instructions, and behaviour tailored to specific business needs.

HR Benefits Assistant

Use Case: Answering employee questions about health plans, dental coverage, parental leave, and other benefits — across regions.

How It Works:

  • Searches HR policies in SharePoint by country 

  • Responds with comparison tables (e.g., plan, provider, extras) 

  • Speaks in a professional, helpful tone

  • Politely declines off-topic questions (e.g., payroll or equipment)

Sample Instruction Snippet:

"Answer only questions related to employee benefits… Present health plan comparisons in a table with bold highlights. Use only SharePoint documents relevant to the employee's country."

What Makes It Effective:

  • Prevents the AI from "hallucinating" answers by staying within scope

  • Visual format (tables) improves comprehension

  • Built-in fallback for questions outside HR ensures trustworthiness

Text Summariser Agent

Use Case: Condensing long emails, documents, or meeting notes into bullet-point summaries — perfect for consultants or managers.

How It Works:

  • Extracts 5–7 factual bullet points from any text 

  • Does not rely on external knowledge — just transformation

  • Uses a formal, neutral tone

  • Refuses tasks outside its role (e.g., "write an email")

Sample Instruction Snippet:

"Summarise texts in bullet-point form. Keep each point short and factual. Never skip key content. Do not engage in other tasks."

What Makes It Effective:

  • Keeps output clean and consistent

  • Great for overloaded teams that need fast insights

  • Simple to deploy, even without external integrations

IT Helpdesk Virtual Assistant

Use Case: First-line tech support for company employees — handling common issues like password resets, VPN setup, and software guidance.

How It Works:

  • Offers step-by-step instructions in plain language 

  • Searches SharePoint IT guides and troubleshooting pages

  • Responds with a calm, patient tone

  • Knows its limits — declines questions about HR, payroll, or personal devices

Sample Instruction Snippet:

"Support scope: work email, VPN, Office apps, hardware. Use numbered steps. Avoid jargon. Politely redirect HR/finance questions."

What Makes It Effective:

  • Mimics real-life support agent behaviour

  • Prevents misguidance by sticking to supported topics

  • Adds a human touch with empathy-focused tone

User Story Helper Bot

Use Case: Helping consultants and product teams write better user stories — quickly and consistently.

How It Works:

  • Turns rough notes into structured user stories

  • Follows a fixed format (As a [role], I want [goal], so that [reason])

  • Creates user stories with clarity, completeness, and business value in mind

  • Flags missing info and asks follow-up questions

Sample Instruction Snippet:

"Structure every story with: Header, AS-IS/TO-BE, Functional Requirements, Acceptance Criteria. Never assume missing details."
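The fixed format and the "never assume missing details" rule can be sketched together as a small template. In the real bot the model generates the text itself; the helper and its missing-info check below are illustrative assumptions:

```python
# Illustrative sketch of the user-story format and the "flag missing info"
# behaviour: incomplete input triggers a follow-up instead of a guess.

def format_user_story(role: str, goal: str, reason: str) -> str:
    missing = [
        name
        for name, value in (("role", role), ("goal", goal), ("reason", reason))
        if not value.strip()
    ]
    if missing:
        return f"Missing details: {', '.join(missing)}. Please provide them."
    return f"As a {role}, I want {goal}, so that {reason}."

story = format_user_story(
    "sales manager", "to filter leads by region", "I can prioritise follow-ups"
)
```

Refusing to fill gaps silently is what keeps the bot's output trustworthy during sprint planning.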

What Makes It Effective:

  • Encourages consistency across teams

  • Saves time during sprint planning

  • Flags weak or incomplete stories early

Summary

Each agent above showcases how clear instructions + relevant data (like SharePoint) can create focused, useful assistants. Whether it's helping HR, summarising docs, guiding users through IT issues, or improving your product backlog — agents can take on tasks your team doesn't need to repeat manually.


Final Checklist: What Your Agent Needs

Before hitting publish on your agent, ask:

  • Is the scope clearly defined?

  • Is the tone aligned with your audience?

  • Are the data sources accurate and accessible?

  • Are the boundaries well enforced?

  • Have you tested with real users?

Need help launching your first agent?

Designing powerful, trustworthy AI agents doesn't require guesswork — just the right structure, examples, and a platform like Microsoft Copilot Studio. If you use SharePoint, you're already ahead of the game. It's one of the best data sources for AI agents: searchable, structured, and secure.

At ARP Ideas, we help organisations like yours design AI agents that actually solve real problems — not create new ones. Let's build something smart, fast, and reliable — together.

Author: Dobrosław Przebieracz

Business Lead / MS Power Platform Architect Expert

Combines business and system analysis with designing solutions tailored to each client’s needs. Has led numerous projects involving Dynamics 365 CRM, SharePoint, and custom-built applications. Passionate about new technologies and AI since 2019 (before it became mainstream). At ARP Ideas, he drives consulting best practices and organizational improvements.
