We just launched browsers.bug0.com. It lets you spin up cloud browsers instantly for automation, testing, and agentic workflows.

I'm excited about how well this fits with our open-source tools:
- Passmark for AI-powered regression testing and automation
- Bug0 browsers for browser infrastructure that scales

Put together, you get a much simpler stack for automating browser tasks end to end: from tests, to repetitive workflows, to full agent-driven actions.

The goal is straightforward: make browser automation feel easy again.

Try it here: https://browsers.bug0.com
Launch: Instant Cloud Browsers for Automation and Testing
More Relevant Posts
Cloudflare just made it easier to build scrapers. Their Browser Rendering product now supports the Chrome DevTools Protocol: managed, remote Chrome instances with Puppeteer, Playwright, and AI agent support. Per their own announcement, coding agents can connect via MCP, navigate pages, and execute JavaScript on Cloudflare's infrastructure. It scales automatically. It runs when your laptop is off.

Any cloud provider can host automation tools; AWS and Google Cloud do it too. But Cloudflare is in a unique position here: they sell bot protection to publishers as a core product. They're now on both sides of the same transaction. The company you pay to stop browser automation is also the company offering managed browser automation as a service.

We hear this from publishers constantly at Centinel: "Why does our bot problem keep getting worse despite what we're spending?" Part of the answer might be structural. When the same company profits from both the attack infrastructure and the defense layer, the defense only needs to work well enough to justify the next invoice. Not well enough to make itself unnecessary.

This isn't unique to Cloudflare; it's a pattern across the bot protection market. Vendors benefit from the threat persisting. The question is whether buyers are factoring that into their evaluations.

If you're choosing a bot protection vendor right now, how much weight do you give to whether they also profit from the other side?
I gave Claude Code one instruction and walked away with a full automation test suite running live on TestMu AI Cloud. Here's exactly what happened.

The starting point: zero test files. Just an idea: "I want Playwright tests for the Airbnb login flow."

What I asked: "Write Playwright tests (https://lnkd.in/gvHvzpRX) for Airbnb login and run them on TestMu AI Cloud across Chrome, Firefox, Edge, and Safari, on Windows and macOS." That's it. One ask.

What the TestMu AI Playwright Skill did:
→ Scaffolded the entire project: playwright.config.ts, page objects, test specs, .env for credentials
→ Built a proper Page Object Model for the Airbnb login flow covering login modal open/close, email and phone input validation, social login options (Google, Facebook, Apple), password field behavior, and security checks
→ Wired the LambdaTest CDP connection automatically: a lambdatest-setup.js fixture that routes tests to cloud or local based on project name, handles session lifecycle, and reports pass/fail status back to the dashboard (the routing pattern is sketched after this post)
→ Configured the full browser matrix in playwright.config.ts: Chrome 120, 121, and latest across Windows 10, Windows 11, macOS Sonoma, and macOS Ventura, plus Edge, Firefox, and Safari: 15 combinations total

What I saw in the TestMu AI Web Automation Dashboard:
✅ Every test session listed with browser, OS, and version
✅ Live video recording of each test run
✅ Screenshot on failure, showing exactly where it broke
✅ Network logs, console logs, full trace
✅ Pass/fail status reported automatically from the test itself

30 tests. 15 browser/OS combos. 22 passing on first run, without touching a single VM or installing a single browser. The failures? Selector edge cases specific to how Airbnb renders its modal in different regions. The skill gave me the structure to fix them in minutes, not hours.

What used to take me a full day: setting up cloud credentials, writing the CDP connection boilerplate, structuring page objects, configuring the browser matrix. What it took with TestMu AI Agent Skills: one conversation with Claude Code.

This isn't AI writing bad test code and hoping it works. This is an agent that already knows production Playwright patterns, LambdaTest integration, and real-world project structure, out of the box.

🔗 Try it yourself: https://lnkd.in/gvHvzpRX

#TestMuAI #Playwright #LambdaTest #TestAutomation #AI #ClaudeCode #OpenSource #CrossBrowserTesting #QA
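For readers who haven't seen the pattern: here is a minimal sketch of what that cloud-vs-local project routing can look like in playwright.config.ts. The wsEndpoint format, capability keys, and environment variable names below are illustrative assumptions, not LambdaTest's exact contract; the generated fixture described in the post handles these details for you.

```typescript
// playwright.config.ts: a minimal sketch of cloud-vs-local routing.
// The endpoint URL and capability names are assumptions for illustration.
import { defineConfig, devices } from '@playwright/test';

// Hypothetical capabilities payload, URL-encoded into the CDP endpoint.
const capabilities = encodeURIComponent(
  JSON.stringify({
    browserName: 'Chrome',
    browserVersion: 'latest',
    'LT:Options': {
      platform: 'Windows 11',
      user: process.env.LT_USERNAME,
      accessKey: process.env.LT_ACCESS_KEY,
    },
  }),
);

export default defineConfig({
  projects: [
    // Local project: launches a bundled browser on your machine.
    { name: 'local-chromium', use: { ...devices['Desktop Chrome'] } },
    // Cloud project: same tests, but Playwright connects to a remote
    // browser over CDP instead of launching one locally.
    {
      name: 'cloud-chrome-win11',
      use: {
        connectOptions: {
          wsEndpoint: `wss://cdp.lambdatest.com/playwright?capabilities=${capabilities}`,
        },
      },
    },
  ],
});
```

With a config shaped like this, npx playwright test --project=cloud-chrome-win11 routes that project's tests to the remote browser, while --project=local-chromium stays on your machine; the test specs themselves don't change.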
Struggling with flaky logins and anti-bot walls? 🧱 Browser automation is easy until it hits production. We’ve broken down the top platforms—from Browserbase's managed infra to Axiom's no-code bots—to help you scale without the headaches. Read more: https://lnkd.in/d3i_hb2m #RPA #Playwright #AIagents
👀 Cloudflare just made it possible to give AI agents a browser, without giving them YOUR browser 👇

If you're building AI agents that browse the web, you've had two bad options: let the agent use your local browser (with all your cookies, credentials, and open sessions), or spin up and manage your own headless Chrome infrastructure. Cloudflare just launched CDP support for Browser Rendering, and it solves both problems.

🔧 What CDP actually is
Chrome DevTools Protocol is the low-level protocol that powers Puppeteer, Playwright, and pretty much every browser automation tool. It's what Chrome DevTools itself uses under the hood. Cloudflare now hosts managed browser instances you can connect to via WebSocket.

🤖 Why AI agent builders should care
If your agent supports MCP, you add a few lines of config and it can navigate pages, take screenshots, run audits, and debug JavaScript on a remote browser. Claude Code, Cursor, and OpenCode all work with it. The agent never touches your local browser; it runs when your laptop is off, and Cloudflare handles the scaling.

⚡ Why automation teams should care
If you already have Puppeteer or Playwright scripts, you swap one WebSocket endpoint URL and add your Cloudflare API key. Your existing code runs on managed infrastructure with no rewrite required. (A sketch of what that swap looks like follows below.)

The part I find most interesting is what this means for MCP adoption. Browser access has always been the hardest capability to safely grant to an agent. I haven't tested how it handles sites with heavy client-side rendering or complex auth flows yet, so I can't vouch for the edge cases. But the core idea of separating the agent's browser from your browser feels like the right architecture.

Full docs in the comments 👇

Are you giving your AI agents browser access yet, or is that still a line you haven't crossed?

#Cloudflare #AIAgents #WebDev #MCP #BrowserAutomation
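Here is a minimal sketch of the "swap one endpoint" idea using Playwright's connectOverCDP. The endpoint URL and auth header are placeholders, not Cloudflare's documented values; check their docs for the real connection details.

```typescript
// Pointing an existing Playwright script at a managed remote browser over
// CDP. Locally this would have been chromium.launch(); remotely, only the
// connection call changes. Endpoint and auth header are placeholders.
import { chromium } from 'playwright';

async function main() {
  const browser = await chromium.connectOverCDP(
    'wss://browser-rendering.example.invalid/cdp', // placeholder endpoint
    { headers: { Authorization: `Bearer ${process.env.CF_API_TOKEN}` } },
  );

  // Everything below is unchanged from a local script.
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());

  await browser.close();
}

main().catch(console.error);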
Claude vs Gemma (Browser agent part 2)

I gave two models the exact same browser automation job. Same 7 tasks. Same site. Same system prompt. Same selectors available on the page.

The results:
- Claude Opus 4.6: 7/7 passed. 2,559 tokens. 4.6 seconds.
- Gemma 4 27B (local): 4/7 passed. 11,958 tokens. 5 minutes 14 seconds.

That's 100% vs 57%. But the interesting part isn't the score; it's why Gemma failed.

All three failures trace to the same root cause: selector hallucination. The generated scripts are structurally correct, with the right imports, the right Playwright API calls, and the right test flow. But the selectors are invented:
- T4 (Add to Cart): Gemma wrote [data-test="cart-badge"]. The actual element is .shopping_cart_badge.
- T5 (Checkout): Gemma wrote .sc-80000-0. The actual element is .complete-header.
- T7 (Remove from Cart): Gemma used "standard_cart" as the username. The actual credential is "standard_user".

Three failures, three near-misses. The model understands the task perfectly. It just hallucinates the last mile, inventing selectors that follow plausible naming conventions but don't exist on the page (the sketch after this post shows what that looks like in a test).

On MCP it gets worse: Gemma drops to 1/7 passed and burns 284,562 tokens trying. It can't reliably parse the accessibility tree snapshots that MCP returns.

$100/month (Max subscription) for 100% reliability. $0 local inference for 57%. For browser automation today, the frontier model earns its keep, but the failure pattern suggests the gap is about grounding, not reasoning. Better selector maps and few-shot examples might close it.

For a full failure teardown (with the actual generated code vs what the page expects), read my Substack post (link in the comment).
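To make the failure mode concrete, here is a minimal sketch of T4 written both ways as a Playwright test. The target site and credentials (saucedemo.com, standard_user / secret_sauce) are my inference from the selectors and username quoted above; the post doesn't name the site.

```typescript
// T4 (Add to Cart), assuming a SauceDemo-style target. Both tests are
// structurally identical; only the final selector differs.
import { test, expect } from '@playwright/test';

test('add to cart, hallucinated selector (fails)', async ({ page }) => {
  await page.goto('https://www.saucedemo.com/');
  await page.fill('#user-name', 'standard_user'); // T7's bug: model typed "standard_cart" here
  await page.fill('#password', 'secret_sauce');
  await page.click('#login-button');
  await page.click('button[id^="add-to-cart"]');
  // The invented selector: follows a plausible naming convention,
  // but no such element exists on the page, so the assertion times out.
  await expect(page.locator('[data-test="cart-badge"]')).toHaveText('1');
});

test('add to cart, actual selector (passes)', async ({ page }) => {
  await page.goto('https://www.saucedemo.com/');
  await page.fill('#user-name', 'standard_user');
  await page.fill('#password', 'secret_sauce');
  await page.click('#login-button');
  await page.click('button[id^="add-to-cart"]');
  // The element that actually exists on the page.
  await expect(page.locator('.shopping_cart_badge')).toHaveText('1');
});
```

The contrast is the whole point: nothing about the first test looks wrong until it runs, which is why grounding (a selector map extracted from the live page) helps more than a smarter prompt.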
I won't explain why; you'll know when you read it. But if there is one tech blog you should subscribe to today, it's this one ⬇️ Packed with well-explained experiments and insights from practice. Go check it out for yourself.
Test engineering - building the frameworks, the processes, and the engineers who run them. Now with agents in the loop.
The perennial debate in browser automation continues to evolve, and for good reason. Choosing the right tool isn't just a technical preference; it dictates efficiency, test coverage, and ultimately, the robustness of your digital products.

A recent piece from Firecrawl caught my eye, diving into the "Playwright vs Puppeteer" discussion. The article, published back in February, touches on how Puppeteer often shines for Chrome-specific tasks and stealth web scraping, while both are invaluable for critical processes like screenshot testing and performance analysis.

My take? While Puppeteer certainly has its loyalists and excels in its niche, particularly within the Chromium ecosystem, Playwright has undeniably established itself as the more versatile and future-proof option for most general-purpose browser automation. Its native support across Chromium, Firefox, and WebKit, without needing separate drivers, significantly reduces overhead and complexity for teams building cross-browser compatible applications. (The sketch below shows how little code that takes.)

This isn't to say Puppeteer is obsolete; far from it. For projects deeply entrenched in Chrome or requiring its specific nuances, it remains a powerful choice. However, for those looking to maximize test coverage and streamline their CI/CD pipelines across diverse browser environments, Playwright's multi-browser, multi-language API is a compelling advantage. It often simplifies the developer experience by providing a consistent interface, cutting down on the context switching and unique configurations that can plague cross-browser testing efforts.

You can read their full breakdown here: https://lnkd.in/gQ2qmPG2

Ultimately, the 'best' tool is the one that best fits your team's specific requirements and future roadmap. But as the web continues to fragment and diversify, are we seeing a definitive shift towards broader, more integrated solutions like Playwright, or will specialized tools like Puppeteer always maintain their unique, indispensable place in the developer toolkit?

#BrowserAutomation #DevTools
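To illustrate the cross-engine point, here is a minimal sketch of one script running unchanged against all three engines Playwright bundles. This is plain Playwright API; example.com stands in for a real target.

```typescript
// One script, three browser engines, no per-browser drivers to manage.
import { chromium, firefox, webkit, BrowserType } from 'playwright';

async function checkTitle(engine: BrowserType) {
  const browser = await engine.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(`${engine.name()}: ${await page.title()}`);
  await browser.close();
}

async function main() {
  // The same code path runs against Chromium, Firefox, and WebKit.
  for (const engine of [chromium, firefox, webkit]) {
    await checkTitle(engine);
  }
}

main().catch(console.error);
```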
How do you 5x your web dev speed with a single MCP? By putting your agents in a feedback loop, so they Build -> Test -> Troubleshoot -> Fix -> Test Again with almost no effort on your part.

Chrome DevTools MCP is an official MCP server built by Google's Chrome team. One command and your Claude Code agent gets full access to the Chrome DevTools Protocol, the same APIs that Chrome DevTools uses internally. Your agent can now see your actual rendered pages, measure real performance, and fix what it finds.

Here's how to set it up in 4 steps:

1. Install in Claude Code.
claude mcp add chrome-devtools npx chrome-devtools-mcp@latest
No API keys. No browser extension. Chrome launches automatically on first use. (An equivalent config-file setup is sketched below.)

2. Ask Claude to test your app.
"Test the performance of localhost:3000" or "Check accessibility on my dashboard." Claude opens Chrome, navigates your page, runs a performance trace, measures Core Web Vitals, and returns a full report.

3. Get real numbers, not guesses.
LCP under 2.5s? CLS under 0.1? INP under 200ms? Claude measures all three. It captures network traffic, identifies slow API calls, finds console errors, validates WCAG accessibility, and emulates mobile devices with throttled networks. 27 tools across 6 categories.

4. Fix and verify in a loop.
Claude doesn't just report problems. It fixes the code, re-opens the browser, re-measures, and confirms the fix worked. No more "try this and let me know." The agent knows if it worked because it can see the result.

Your AI agent now builds a feature, opens it in a real browser, checks Core Web Vitals, validates accessibility, tests on mobile viewports, finds bottlenecks, fixes them, and verifies. One loop. No screenshots. No back-and-forth. Code that ships with performance, accessibility, and quality baked in from the first commit.

Alternatives worth exploring:
→ Playwright MCP: cross-browser testing (Firefox, WebKit). Heavier on context. Use for multi-browser support.
→ Lighthouse CI: automated audits in CI. Gate PRs on performance budgets.
→ Browserbase + Stagehand: cloud browsers for agent automation at scale.
→ WebPageTest API: deep perf analysis with filmstrip views and waterfalls.

Stop building blind. Give your agents eyes.
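For reference, here is what the equivalent project-scoped config looks like, assuming the standard .mcp.json shape that Claude Code reads; the one-line CLI command in step 1 effectively writes this entry for you.

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Checking this file into the repo shares the setup with the whole team, so every agent session gets the same browser tooling without re-running the install command.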
Static site (brochure)
- Milestone 1: design ₹15k (Figma)
- Milestone 2: build ₹40k (Hugo + Tailwind, deployed to GitHub Pages, free)
- Milestone 3: SEO/copy ₹15k (meta tags, Google Search Console)
Total: ~₹70k. Cheap hosting, near-zero maintenance.

Dynamic web app (lead tracker)
- Milestone 1: specs ₹25k
- Milestone 2: MVP ₹1.3L (Next.js, Node/Express, MongoDB Atlas free tier, Vercel)
- Milestone 3: QA/launch ₹30k
Total: ~₹1.85L. Pay per milestone; cloud free tiers keep ops low-cost.
📖 Browser Fingerprinting Explained: What Websites Know About Your Scraper (And How to Fix It)

This article explains how modern anti-bot systems fingerprint browsers to detect scrapers, and it provides detailed technical fixes and strategies Playwright users can apply to evade detection when scraping websites. (One commonly cited first-line fix is sketched below.)

#Playwright #E2E #TestAutomation #FrontEnd
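As one concrete example of the class of fixes such articles cover (my example, not a summary of this particular article): masking the navigator.webdriver flag with an init script. On its own this won't defeat a serious anti-bot system, but it's the canonical first signal automation exposes.

```typescript
// Masking the navigator.webdriver flag, the most basic automation signal.
// Serious fingerprinting checks many more properties than this one.
import { chromium } from 'playwright';

async function main() {
  const browser = await chromium.launch();
  const context = await browser.newContext();

  // Runs before any page script, so the site never sees webdriver: true.
  await context.addInitScript(() => {
    Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
  });

  const page = await context.newPage();
  await page.goto('https://example.com');
  console.log(await page.evaluate(() => navigator.webdriver)); // undefined

  await browser.close();
}

main().catch(console.error);
```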