In the age of information overload, staying on top of technical news is a full-time job. I’ve been working on a Python-based CLI utility that acts as a personal technical journalist: it digests web content using local LLMs and helps you share the takeaways to Mastodon with zero friction.
What is it?
This tool is an interactive bridge between your local Ollama models and your social presence. While it functions as a standard chat interface, its real power lies in its URL-to-Summary workflow. Simply paste a link, and the tool fetches the content, analyzes it, and produces a concise, journalistic summary ready for posting.
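The summarization step of that workflow can be sketched roughly like this, assuming the `ollama` Python client. The persona prompt, helper names, and default model below are my illustrative assumptions, not the project's actual code:

```python
# Hypothetical persona prompt -- the project's real wording will differ.
SYSTEM_PROMPT = (
    "You are a technical journalist. Summarize the article objectively, "
    "focusing on what happened and why it matters. Avoid marketing jargon."
)

def build_messages(article_text: str) -> list[dict]:
    """Pair the journalist persona with the scraped article text."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this article:\n\n{article_text}"},
    ]

def summarize(article_text: str, model: str = "llama3") -> str:
    # Model name is a placeholder; the tool lets you pick any installed model.
    # Imported lazily so the prompt helpers work without a running daemon.
    import ollama
    response = ollama.chat(model=model, messages=build_messages(article_text))
    return response["message"]["content"]
```

The system/user message split is what keeps the persona stable across every link you paste in a session.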
Key Features
- Local-First Intelligence: Powered by Ollama, it lets you choose any of your locally installed models. It even supports "Thinking" models (like DeepSeek-R1), showing you the AI's reasoning process in real time.
- Automated Web Scraping: Using BeautifulSoup, the tool strips away headers, footers, and ads from any URL you provide, feeding only the relevant content to the LLM for analysis.
- The "Technical Journalist" Persona: It uses a specialized system prompt to ensure summaries are objective, data-driven, and focused on "the what" and "the why," bypassing marketing jargon.
- Smart Mastodon Integration: Mastodon’s 500-character limit can be tricky because every URL counts as a fixed 23 characters, regardless of its actual length. This tool calculates the weighted length of your post and, if the response is too long, automatically re-prompts the LLM to rewrite a more concise version until it fits.
- Clipboard & Workflow: Every response is automatically copied to your clipboard, making it easy to use the generated text elsewhere even if you don't post it immediately.
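The weighted-length check described above can be sketched like this. The regex, limit handling, and function names are my own illustration (Mastodon does count every link as a flat 23 characters, which is the fact the tool relies on):

```python
import re

URL_RE = re.compile(r"https?://\S+")
URL_WEIGHT = 23   # Mastodon counts every link as 23 characters, however long
LIMIT = 500       # Mastodon's default per-post character limit

def weighted_length(text: str) -> int:
    """Length as Mastodon counts it: real characters, plus 23 per URL."""
    without_urls = URL_RE.sub("", text)
    return len(without_urls) + URL_WEIGHT * len(URL_RE.findall(text))

def fit_to_limit(text: str, shorten, max_tries: int = 3) -> str:
    """Retry with `shorten` (e.g. an LLM re-prompt) until the draft fits."""
    for _ in range(max_tries):
        if weighted_length(text) <= LIMIT:
            return text
        text = shorten(text)
    return text
```

In the real tool, `shorten` would be the call back into the LLM asking for a more concise rewrite of the same summary.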
The Tech Stack
The project is built on a clean, modular Python foundation:
- Ollama API: For local model orchestration.
- Requests & BeautifulSoup4: For robust web content extraction.
- Mastodon.py: For seamless API interaction.
- Pyperclip: For instant clipboard access.
- Humanize: For readable model management in the CLI.
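The extraction step from this stack can be sketched as follows. The tag list and function names are my assumptions; the project's actual filtering may be more selective:

```python
import requests
from bs4 import BeautifulSoup

# Elements that are almost always page chrome rather than article content
CHROME_TAGS = ["script", "style", "header", "footer", "nav", "aside"]

def strip_chrome(html: str) -> str:
    """Drop boilerplate elements and return the remaining visible text."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(CHROME_TAGS):
        tag.decompose()  # remove the element and all of its children
    return " ".join(soup.get_text(separator=" ").split())

def fetch_article(url: str) -> str:
    """Fetch a page and reduce it to plain text ready for the LLM."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return strip_chrome(response.text)
```

Collapsing whitespace at the end matters in practice: it keeps the prompt compact, so more of the model's context window goes to actual article text.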
Why Use It?
By running everything locally, you get total privacy and zero API costs for your LLM usage. It’s an efficient way to curate content, summarize long-form articles, and maintain an active social media presence without leaving your terminal.
https://github.com/mainmeister/run_ollama.git
#Python #Ollama #LocalAI #Mastodon #OpenSource #LLM #Automation