Contract / Hourly | Remote | Async-First

Prompt Engineer & AI Workflow Specialist

Owns output quality and reliability across AI components. Designs prompt architectures, builds Copilot Studio agents, creates evaluation frameworks, and tunes AI workflows grounded on enterprise data inside the client security boundary.

What you'll do

Day-to-day

  • Design and iterate on prompt architectures for classification, summarization, extraction, drafting, and routing tasks.
  • Build Copilot Studio conversational agents grounded on SharePoint knowledge bases, SOPs, and enterprise data.
  • Design prompts for Azure AI Document Intelligence workflows: invoice parsing, JHA digitization, field document extraction.
  • Build evaluation frameworks: test suites, scoring rubrics, and regression checks for prompt performance.
  • Develop and maintain prompt templates and libraries across client engagements.
  • Test edge cases, adversarial inputs, and failure modes before they reach production.
  • Tune AI workflows: select models, configure routing logic, optimize for cost/quality/latency tradeoffs within Azure OpenAI Service.
  • Define and enforce guardrails: output validation, content filtering, confidence thresholds, human-in-the-loop triggers, and tenant-boundary data controls.
  • Prototype rapid AI experiments: build minimum-viable AI workflows to validate use cases before full engineering investment.
  • Document prompt design decisions and maintain version-controlled prompt repositories.
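To give a flavor of the evaluation-framework work above, here is a minimal sketch of a prompt regression check. Everything in it is illustrative: the `score_output` rubric, the `fake_model` stub (standing in for a real Azure OpenAI call), and the test case are hypothetical, not drawn from any actual engagement.

```python
# Minimal sketch of a prompt regression harness (illustrative only).

def score_output(output: str, must_include: list[str], max_chars: int) -> float:
    """Score one model output against a simple rubric:
    coverage of required phrases, penalized if over a length budget."""
    hits = sum(1 for phrase in must_include if phrase.lower() in output.lower())
    coverage = hits / len(must_include)
    length_ok = 1.0 if len(output) <= max_chars else 0.5
    return coverage * length_ok

def run_regression(cases, call_model, threshold=0.8):
    """Run every case through the model and collect regressions
    whose score falls below the threshold."""
    failures = []
    for case in cases:
        output = call_model(case["prompt"])
        score = score_output(output, case["must_include"], case["max_chars"])
        if score < threshold:
            failures.append((case["id"], score))
    return failures

# Stub standing in for a real model API call.
def fake_model(prompt: str) -> str:
    return "Invoice INV-1042 total: $1,280.00, due 2024-07-01."

cases = [
    {"id": "invoice-total", "prompt": "Extract the invoice total.",
     "must_include": ["total", "$1,280.00"], "max_chars": 200},
]

print(run_regression(cases, fake_model))  # [] means no regressions
```

In practice a harness like this would be version-controlled alongside the prompt library, so every prompt change runs against the full suite before it ships.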

What we're looking for

You should

  • Have deep, practical experience designing prompts for production systems, not just chat experiments.
  • Understand model behavior across providers (Azure OpenAI, Claude, GPT, open-source) and make informed model-selection recommendations.
  • Have experience with, or a strong interest in, Copilot Studio agent design and RAG architectures over enterprise knowledge bases.
  • Think in systems: you see prompts as components inside larger workflows, not standalone instructions.
  • Be rigorous about evaluation: you build tests, measure outputs, and make data-driven decisions about what is working.
  • Prototype quickly: you're comfortable with Python, API calls, and lightweight tooling to stand up test workflows fast.
  • Understand enterprise data boundaries: AI features must be grounded on verified sources with no data leakage outside the tenant.
  • Communicate clearly in writing: you document your reasoning, your test results, and your recommendations.

Nice to have

Bonus points

  • Experience building RAG systems, agentic workflows, or tool-use architectures
  • Hands-on experience with Copilot Studio or Azure OpenAI Service in production
  • Background in NLP, computational linguistics, or ML evaluation methodology
  • Familiarity with Azure AI Document Intelligence for document processing prompts
  • Exposure to compliance or safety-critical contexts where output reliability is non-negotiable

Interested?

Tell us what you build. If there's a fit, we'll bring you onto the next engagement that needs your skills.

Apply to the network