
Agent Feedback Loop

The core use case for Notebind: your AI agent writes a document, humans review it, and the agent iterates based on feedback.

Agent writes markdown
        │
Push to Notebind ──────────▶ Human reviews in browser
        │                             │
        │                  Leaves comments & suggestions
        │                             │
        ▼                             ▼
Pull feedback  ◀──────────  Feedback available via API
        │
Agent processes feedback
        │
Agent updates document
        │
Push updated version ──────▶ Repeat
  1. Agent creates the document

    import requests

    API_KEY = "nb_sk_YOUR_KEY"
    BASE = "https://notebind.com/api"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }

    # Create document
    doc = requests.post(f"{BASE}/documents", headers=headers, json={
        "title": "Q1 Report Draft",
        "content": agent_generated_markdown,
    }).json()["data"]
    doc_id = doc["id"]
  2. Create a share link for reviewers

    share = requests.post(
        f"{BASE}/documents/{doc_id}/share",
        headers=headers,
        json={"permission": "comment"},
    ).json()["data"]

    review_url = f"https://notebind.com/share/{share['token']}"
    # Send review_url to human reviewers via email, Slack, etc.
  3. Poll for feedback

    import time

    def get_unresolved_feedback(doc_id):
        comments = requests.get(
            f"{BASE}/documents/{doc_id}/comments",
            headers=headers,
        ).json()["data"]
        suggestions = requests.get(
            f"{BASE}/documents/{doc_id}/suggestions",
            headers=headers,
        ).json()["data"]
        unresolved_comments = [c for c in comments if not c["resolved"]]
        pending_suggestions = [s for s in suggestions if s["status"] == "pending"]
        return unresolved_comments, pending_suggestions

    # Check periodically
    while True:
        comments, suggestions = get_unresolved_feedback(doc_id)
        if comments or suggestions:
            break
        time.sleep(60)  # Check every minute
  4. Process feedback and update

    # Feed comments to your LLM for processing
    feedback_prompt = "Here are the review comments on the document:\n\n"
    for comment in comments:
        feedback_prompt += f"- {comment['body']}"
        if comment.get("anchor_text"):
            feedback_prompt += f" (on: \"{comment['anchor_text']}\")"
        feedback_prompt += "\n"
    for suggestion in suggestions:
        feedback_prompt += f"- Replace \"{suggestion['original_text']}\" with \"{suggestion['suggested_text']}\"\n"

    # Generate updated content
    updated_content = your_llm.generate(feedback_prompt + "\n\nOriginal:\n" + doc["content"])

    # Push the update
    requests.patch(
        f"{BASE}/documents/{doc_id}",
        headers=headers,
        json={"content": updated_content},
    )
  5. Resolve processed feedback

    # Resolve comments that were addressed
    for comment in comments:
        requests.patch(
            f"{BASE}/documents/{doc_id}/comments/{comment['id']}",
            headers=headers,
            json={"resolved": True},
        )

    # Accept or reject suggestions
    for suggestion in suggestions:
        action = "accept" if should_accept(suggestion) else "reject"
        requests.patch(
            f"{BASE}/documents/{doc_id}/suggestions/{suggestion['id']}",
            headers=headers,
            json={"action": action},
        )
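Steps 3–5 can be tied together into a single loop. The sketch below keeps the control flow separate from the HTTP calls by injecting them as callables; the helper names (`fetch_feedback`, `apply_feedback`, `push_update`, `resolve_item`) are hypothetical, not part of the Notebind API.

```python
# Sketch: one iteration of the feedback loop. I/O is injected as
# callables so the control flow can be exercised without network access.
# All four parameter names are hypothetical helpers, not Notebind APIs.

def run_feedback_cycle(fetch_feedback, apply_feedback, push_update, resolve_item):
    """Process one round of feedback; return the number of items handled."""
    comments, suggestions = fetch_feedback()
    if not comments and not suggestions:
        return 0  # nothing new this round

    # Rewrite the document in light of the feedback, then push it
    updated_content = apply_feedback(comments, suggestions)
    push_update(updated_content)

    # Only mark feedback handled after the update has been pushed
    for item in comments + suggestions:
        resolve_item(item)
    return len(comments) + len(suggestions)
```

Resolving only after the push succeeds keeps the resolved/unresolved status honest if the update step fails partway.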

The same workflow using the CLI:

    # Push initial document
    notebind push report.md

    # Create share link
    notebind share DOC_ID --permission comment

    # Check for feedback
    notebind comments DOC_ID
    notebind suggestions DOC_ID

    # After updating the file locally
    notebind push report.md

    # Resolve feedback
    notebind resolve DOC_ID COMMENT_ID
    notebind accept DOC_ID SUGGESTION_ID
  • Don’t resolve feedback before processing it — use resolved/unresolved status to track what’s been addressed
  • Accept suggestions programmatically — if a suggestion is a simple text fix, accept it via the API instead of manually rewriting
  • Use pull for a complete snapshot — the CLI pull command returns the document with all comments and suggestions in one JSON blob
  • Create separate share links per reviewer — this lets you revoke access per person if needed
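If you use the `pull` snapshot mentioned above, the filtering can happen locally. A minimal sketch, assuming the snapshot is a JSON blob with top-level `comments` and `suggestions` arrays shaped like the API responses (the field names are assumptions based on the examples above):

```python
import json

# Sketch: reduce a `notebind pull` snapshot to actionable feedback.
# The snapshot shape (top-level "comments"/"suggestions" keys with
# "resolved" and "status" fields) is assumed, not documented here.

def actionable_feedback(snapshot_json):
    """Return (unresolved comments, pending suggestions) from a pull snapshot."""
    snapshot = json.loads(snapshot_json)
    open_comments = [c for c in snapshot.get("comments", []) if not c.get("resolved")]
    pending = [s for s in snapshot.get("suggestions", []) if s.get("status") == "pending"]
    return open_comments, pending
```

This gives the agent the same inputs as the two polling requests in step 3, but from a single CLI call.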