🚨 Python Program: Find Errors in Logs

```python
log = [
    "INFO: Server started",
    "ERROR: Database failed",
    "WARNING: High CPU"
]

for line in log:
    if "ERROR" in line:
        print(line)
```

💡 Real Cloud Support use case:
✔ Identify issues quickly
✔ Automate troubleshooting

#Python #CloudSupport #Automation

---

🔢 Python Data Types (With Real Use)

✔ int → CPU usage (80)
✔ str → log message ("Error occurred")
✔ list → multiple servers
✔ dict → API response

Cloud Support = data handling. Python makes it easy.

#Python #TechSkills
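A minimal sketch of those four types together in a cloud-support context (all values below are hypothetical examples):

```python
# Hypothetical values illustrating each type in a cloud-support context.
cpu_usage = 80                                          # int: CPU percentage
log_message = "Error occurred"                          # str: a log line
servers = ["web-01", "web-02"]                          # list: multiple servers
api_response = {"status": 200, "region": "us-east-1"}   # dict: parsed API response

print(type(cpu_usage).__name__, type(api_response).__name__)  # → int dict
```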

---

🚀 Python Automation Script (Real Use Case)
✔ Check API
✔ Check CPU
✔ Print alert

```python
import requests
import psutil

# Alert if the API returns a non-200 status (5 s timeout).
if requests.get("https://api.github.com", timeout=5).status_code != 200:
    print("API Down")

# Alert if CPU usage over a 1-second sample exceeds 80%.
if psutil.cpu_percent(interval=1) > 80:
    print("High CPU Usage")
```

💡 This is how real cloud monitoring works.

#Python #CloudSupport #Automation

---

📜 Python Automation: Log Error Counter

```python
count = 0
with open("log.txt") as f:
    for line in f:
        if "ERROR" in line:
            count += 1
print("Total Errors:", count)
```

💡 Real Cloud Support task:
✔ Quick issue detection

#Python #LogAnalysis

---

I realized I was performing the same checks daily: server health, connectivity, logs, and basic validations. While none of these tasks were complex, they accumulated over time, making it easy to overlook something during busy shifts.

To address this, I began automating small tasks using simple Python and Bash scripts. These scripts run checks, validate outputs, and flag issues early. The results were significant:

• Reduced manual effort
• Faster checks
• Increased consistency

This experience highlighted an important lesson: automation isn't solely about saving time; it's also about minimizing human error. Now, whenever I find myself repeating a task multiple times, I ask myself: can this be automated?

#automation #datacenter #python #devops #infrastructure
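A minimal sketch of the kind of daily connectivity and log checks described above (the host list and log lines are hypothetical placeholders; real scripts would read them from config and log files):

```python
import socket

# Hypothetical targets; a real check would read these from config.
HOSTS = [("example.com", 443)]

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_log_for_errors(lines):
    """Return only the log lines that contain 'ERROR'."""
    return [line for line in lines if "ERROR" in line]

if __name__ == "__main__":
    for host, port in HOSTS:
        status = "up" if port_open(host, port) else "DOWN"
        print(f"{host}:{port} is {status}")
```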

---

One class swap gave us a 33x p95 improvement.

Python's redis-py has two connection pool implementations, and we were, of course, using the wrong one. The default ConnectionPool apparently holds a single asyncio lock while reconnecting stale connections. A TLS handshake takes 1-5 ms, and every other coroutine on the pod waits. At 500+ rps, a few stale reconnects cascade: coroutines pile up behind the lock, timeouts trigger retries, retries pile up more... The funny thing is that the Redis server itself only sits at 10% CPU.

BlockingConnectionPool moves the reconnect outside the lock. The lock is held for microseconds, and reconnects now happen in parallel.

---

0.401 vs 0.928 table accuracy. The only difference? Which PDF parser you pick.

opendataloader-pdf just hit 0.928. Same PDFs, same tables, 2.3x more accurate. Runs locally: no GPU, no cloud, 60+ pages per second on CPU.

The part nobody talks about: built-in prompt injection filtering. It turns out PDFs can hide invisible text (zero-size fonts, transparent layers) designed to hijack your LLM. Most parsers pass that straight through to your model.

17K+ GitHub stars. Python, Node, and Java SDKs. LangChain integration in 3 lines.

If you're debugging RAG quality and haven't looked at your parser, you're optimizing the wrong layer. Link in comments.

---

Last Friday, I put together a quick Python 🐍 project demonstrating two different approaches to network automation and data extraction on Cisco devices:

1️⃣ Using netmiko to connect via SSH, run `show ip ospf neighbor`, and use regex to parse the unstructured text into clean, actionable neighbor states.
2️⃣ Using the requests library to query a RESTCONF endpoint and pull interface configurations natively in structured JSON.

Why build both? Because it highlights where the industry is heading, and the reality of the mixed hardware I'm actually dealing with in my lab right now. In my repo, you can see this split: the older Catalyst 3750 gets Netmiko because that platform predates model-driven programmability, while the 4451-X running IOS-XE gets RESTCONF because it exposes an HTTP-based API with YANG model support, enabling structured JSON data retrieval rather than CLI text scraping.

I have kept my GitHub repos mostly to myself so far, but I think it would be beneficial to share the code. Check out the code and the sanitized examples here: https://lnkd.in/dRwVvwDv

#NetworkAutomation #Python #Netmiko #RESTCONF
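The regex-parsing half of approach 1️⃣ can be sketched without a live device. The sample text below is a hypothetical `show ip ospf neighbor` capture standing in for what netmiko's `send_command()` would return; the actual pattern in the repo may differ:

```python
import re

# Hypothetical capture of `show ip ospf neighbor` output.
raw = """\
Neighbor ID     Pri   State           Dead Time   Address         Interface
10.0.0.2          1   FULL/DR         00:00:35    192.168.1.2     GigabitEthernet0/1
10.0.0.3          1   FULL/BDR        00:00:31    192.168.1.3     GigabitEthernet0/2
"""

# One neighbor per line: ID, priority, state, dead timer, address, interface.
pattern = re.compile(
    r"(?P<neighbor_id>\d+\.\d+\.\d+\.\d+)\s+\d+\s+"
    r"(?P<state>\S+)\s+[\d:]+\s+"
    r"(?P<address>\d+\.\d+\.\d+\.\d+)\s+(?P<interface>\S+)"
)

neighbors = [m.groupdict() for m in pattern.finditer(raw)]
for n in neighbors:
    print(f'{n["neighbor_id"]} {n["state"]} via {n["interface"]}')
```

Turning the text into a list of dicts up front means the same downstream code can consume either this or the structured JSON from RESTCONF.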

---

I built a local-first visual debugger for Python agents.

If you've ever had an agent return a "fine" answer while the middle of the run was slow, expensive, wrong, or just impossible to inspect, this is for you.

`flow-xray` lets you:
- add `@trace`
- run once
- open one local HTML file

You can inspect:
- LLM calls
- tool calls
- branches and nested steps
- errors
- tokens and cost

No cloud dashboard. No account. Just a local trace you can inspect in your browser.

GitHub: https://lnkd.in/dR6vE5Ar
PyPI: https://lnkd.in/dAA-cbRe

pip install flow-xray

#Python #LLM #AIAgents #OpenAI #DevTools #OpenSource #Debugging

---

😊❤️ Today's topic: requirements.txt in Python

When working on a project, you install many packages. But how will someone else know which packages your project needs? That's where requirements.txt comes in.

What is requirements.txt?
It is a file that stores all the dependencies (packages) required for your project. Example:

```
django==4.2
requests==2.31.0
numpy==1.26.0
```

Create requirements.txt:

```
pip freeze > requirements.txt
```

This command saves all installed packages, with their versions, into the file.

Install from requirements.txt:

```
pip install -r requirements.txt
```

This installs all the required packages in one command.

Why is it important?
- Helps others run your project easily
- Ensures the same package versions everywhere
- Useful in deployment (servers, cloud)

Best Practice: Always use requirements.txt with a virtual environment.

Interview Insight: requirements.txt ensures consistency across development, testing, and production environments.

Quick Question: What will happen if versions are not specified in requirements.txt?

#Python #Programming #Coding #InterviewPreparation #Developers

---

🚀 System Monitoring & Alert Pipeline

I recently built a production-style system monitoring and alert pipeline using Python. This system automatically:

• Monitors CPU, memory, and disk usage
• Detects issues and scans logs for errors
• Sends email alerts when thresholds are exceeded
• Generates system health reports
• Runs automatically using cron scheduling

This project helped me strengthen my skills in automation, system reliability, and production scripting, all essential for Data Engineering and DevOps environments.

#Python #DataEngineering #Automation #DevOps #LearningInPublic #TechPortfolio
https://lnkd.in/enVvnqDv
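The threshold-alert idea at the core of such a pipeline can be sketched with only the standard library (the 90% threshold and alert format below are hypothetical; the full pipeline described in the post also covers CPU/memory, email, and cron):

```python
import shutil

# Hypothetical threshold: alert when disk usage exceeds this percentage.
DISK_ALERT_PCT = 90.0

def disk_usage_pct(path="/"):
    """Return the percentage of disk space used at `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disk(path="/", threshold=DISK_ALERT_PCT):
    """Return an alert string when usage exceeds the threshold, else None."""
    pct = disk_usage_pct(path)
    if pct > threshold:
        return f"ALERT: disk usage {pct:.1f}% on {path}"
    return None

if __name__ == "__main__":
    alert = check_disk()
    print(alert or "disk OK")
```

Returning `None` or a message (instead of printing directly) keeps the check testable and lets a separate step decide whether to email, log, or report.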