run_ollama: Local AI Web-to-Mastodon Summarizer
A Python-based CLI utility that acts as a personal technical journalist: it digests web content using local LLMs (via Ollama) and helps you share the takeaways to Mastodon with zero friction.
Key Features
- Local-First Intelligence: Powered by Ollama, it lets you choose any of your locally installed models. It even supports "Thinking" models (like DeepSeek-R1), showing you the AI's reasoning process in real time.
- Automated Web Scraping: Using BeautifulSoup, the tool strips away headers, footers, and ads from any URL you provide, feeding only the relevant content to the LLM for analysis.
- The "Technical Journalist" Persona: It uses a specialized system prompt to ensure summaries are objective, data-driven, and focused on "the what" and "the why," bypassing marketing jargon.
- Smart Mastodon Integration: Mastodon's 500-character limit can be tricky, because links are counted at a fixed weight rather than their literal length. This tool calculates the "weighted" length of your post and, if the response is too long, automatically re-prompts the LLM for a more concise version until it fits.
- Clipboard & Workflow: Every response is automatically copied to your clipboard, making it easy to use the generated text elsewhere even if you don't post it immediately.
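The link-weighting check described above can be sketched in a few lines. This is an illustrative sketch, not the project's actual code (the regex and function names are hypothetical), assuming Mastodon's usual flat per-link weight of 23 characters:

```python
import re

URL_WEIGHT = 23  # Mastodon counts every link as a flat 23 characters
URL_RE = re.compile(r"https?://\S+")

def weighted_length(text: str) -> int:
    """Post length as Mastodon counts it: each URL contributes URL_WEIGHT."""
    without_urls = URL_RE.sub("", text)
    return len(without_urls) + URL_WEIGHT * len(URL_RE.findall(text))

def fits_mastodon(text: str, limit: int = 500) -> bool:
    """True if the post fits; otherwise the tool re-prompts the LLM."""
    return weighted_length(text) <= limit
```

With this scheme a long URL costs the same 23 characters as a short one, which is why a naive `len()` check would over- or under-estimate the real limit.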
Tech Stack
- Ollama API: For local model orchestration.
- Requests & BeautifulSoup4: For robust web content extraction.
- Mastodon.py: For seamless API interaction.
- Pyperclip: For instant clipboard access.
- Humanize: For readable model management in the CLI.
Setup
1. Clone the repository:
git clone https://github.com/mainmeister/run_ollama.git
cd run_ollama
2. Install dependencies: This project uses uv for dependency management:
uv sync
3. Configure environment variables: Create a .env file based on .env.example:
cp .env.example .env
Edit .env and provide your Mastodon credentials:
MASTODON_BASE_URL=https://your.mastodon.instance
MASTODON_ACCESS_TOKEN=your_access_token_here
# Optional: MASTODON_VISIBILITY=public
# Optional: OLLAMA_HOST=http://remote.host:11434
# Optional: DISABLE_CLIPBOARD=true
4. Run the script:
```bash
uv run main.py
```
*Tip: Use `uv run main.py --no-clipboard` to disable automatic copying, or `uv run main.py --auto` to automatically select defaults and skip confirmation prompts.*
Usage
- Select a Model: Choose from your locally installed Ollama models. The list indicates which models have "Thinking" capabilities.
- Smart Prompting: If your clipboard contains a URL, it is automatically offered as the default prompt—just press Enter to use it.
- Summarize URL: Enter any URL to have the tool fetch, clean, and summarize its content using the "Technical Journalist" persona.
- Post to Mastodon: Review the summary and confirm if you want to post it to Mastodon. The tool handles character limits and automatic rewriting.
CLI Flags
- --no-clipboard: Disables automatic copying of responses to the clipboard.
- --auto: Skips most interactive prompts by selecting defaults (useful for semi-automated workflows).
Privacy & Security
- Local Processing: Your web content and prompts are processed by your own Ollama instance. By default, this is a local server, meaning no data leaves your machine. However, if you configure a remote OLLAMA_HOST, your data will travel over the network to that server.
- SSRF Awareness: The application includes built-in protection against Server-Side Request Forgery (SSRF). It resolves hostnames to all possible IP addresses (including IPv6) and will warn you (requiring confirmation) before fetching content from private, reserved, or loopback network ranges (e.g., your local router or local services).
- Prompt Injection Mitigation: For added security, the tool uses unique delimiters ([WEBPAGE CONTENT START/END]) and strict instructions to prevent untrusted webpage content from overriding the system's journalistic persona.
- Download Limits: To prevent resource exhaustion, the tool only downloads up to 1 MB of content from any provided URL.
- Privacy Controls: Clipboard copying and Mastodon post visibility are configurable via environment variables (DISABLE_CLIPBOARD, MASTODON_VISIBILITY) or CLI flags, putting the user in control of their data.
- Environment Safety: The application includes a built-in check to warn you if your .env file containing credentials is being tracked by Git, helping you avoid accidental leaks.
- Limited Access: For maximum security, use a Mastodon "App" token with limited scopes (write:statuses only) rather than a full-access token. This ensures the application can only post updates and cannot access your private messages or account settings.
- Dependency Security: The project uses pinned dependency versions and is regularly audited for known vulnerabilities (e.g., via pip-audit) to ensure a secure and stable environment.
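The SSRF check described above can be approximated with the standard library alone. This is an illustrative sketch under that reading (function names are hypothetical): resolve every address a hostname maps to, then flag any non-public range for confirmation.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolve_all(hostname: str) -> list:
    """Resolve a hostname to every IPv4/IPv6 address it maps to."""
    infos = socket.getaddrinfo(hostname, None)
    return [ipaddress.ip_address(info[4][0]) for info in infos]

def needs_ssrf_confirmation(url: str) -> bool:
    """True when any resolved address is private, loopback, link-local,
    or reserved -- fetching it could reach internal services."""
    host = urlparse(url).hostname
    if not host:
        return True  # unparseable URL: treat as unsafe
    for addr in resolve_all(host):
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return True
    return False
```

Checking every resolved address (not just the first) matters: an attacker-controlled DNS name can mix a public A record with a private AAAA record, and a single-address check would miss it.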
License
MIT
Links
https://github.com/mainmeister/run_ollama.git
https://youtu.be/OQ1tGI_7VBk?si=qc1_T6n_hZct1USN
#Python #Ollama #LocalAI #Mastodon #OpenSource #LLM #Automation
