LinkedIn comments — ready to post

Generated 2026-04-18 19:08 UTC · 64 comments · sorted by post relevance score

add data: 18 · share experience: 18 · smart question: 12 · contrarian take: 11 · wild card: 5

#1 · Ian Kerins · score 9.2 · add data · value 9/10
View original post
Increasing scraping performance doesn’t need to be hard. During a Cloudflare benchmark, we switched one library and saw a 56% performance jump 🔥 ❌ No new proxies. ❌ No added browsers. ❌ No bigger budget. Takeaway...stop using Python Requests today. 💡 We tested 20 Cloudflare-protected domains. Both setups were using proxies. Same targets. Same environment. The only change? 🐍 Vanilla Python requests → ⚡ Python Curl CFFI Results: 📉 Requests: 29% success rate 📈 Curl Impersonate: 46% success rate Still not great performance, but they are heavily protected sites that mainly require JS rendering...but high ROI for a 1 minute change. Domain coverage improved too: ✅ 6 domains working → 10 working Same proxies. Same cost. Modern scraping requires you to stay on top of the latest tooling if you want to get the best performance at the lowest costs. We’ll be sharing more of these benchmark insights soon.
Your comment
Saw almost the same jump on our side when we swapped Requests for curl_cffi on Cloudflare targets. The weird part nobody talks about: half the gain is not the TLS fingerprint, it is the default header order matching real Chrome. One line change, massive success rate bump.
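A minimal sketch of the swap, assuming a recent curl_cffi release (the generic "chrome" impersonation target; older releases want a pinned build like "chrome110") and placeholder proxy and target URLs:

```python
# Vanilla Requests vs curl_cffi with Chrome impersonation.
# Proxy URL and target domain are placeholders, not real endpoints.
import requests
from curl_cffi import requests as cffi_requests

PROXY = {"https": "http://user:pass@proxy.example.com:8000"}  # placeholder
URL = "https://www.example.com/"  # placeholder Cloudflare-protected target

# Default TLS fingerprint and header order give this away on protected sites.
plain = requests.get(URL, proxies=PROXY, timeout=30)

# Impersonating a real Chrome build sets the matching TLS fingerprint *and*
# Chrome-like default headers in Chrome's order.
impersonated = cffi_requests.get(URL, proxies=PROXY, impersonate="chrome", timeout=30)

print(plain.status_code, impersonated.status_code)
```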
Open post ↗
#2 · Pierluigi Vinciguerra · score 9.2 · contrarian take · value 9/10
View original post
Google filed a DMCA lawsuit against SerpApi on December 19, 2025. No cease-and-desist letter. No prior communication. Straight to federal court. The twist: Google is not using the Computer Fraud and Abuse Act, the traditional weapon against scrapers. After hiQ Labs v. LinkedIn gutted CFAA protections, Google needed a new legal theory. They found it in Section 1201 of the DMCA. The argument chain: Google's search results contain copyrighted images from Knowledge Panels and Shopping results. SearchGuard (their JavaScript challenge system) is a "technological protection measure" controlling access to those works. SerpApi circumvents SearchGuard. Therefore, SerpApi violates federal anti-circumvention law. Each circumvention carries $200-$2,500 in statutory damages. Google alleges billions of circumventions. SerpApi earns "a few million dollars annually" but faces liability "orders of magnitude higher." This is not just about one API company. If Google's theory wins, every CAPTCHA and JavaScript challenge on any website with copyrighted content becomes a federal enforcement mechanism. Solve a CAPTCHA to scrape? Federal crime. Execute JavaScript to render a page? Federal crime. Reddit filed a similar lawsuit in October. The legal pattern is emerging: deploy anti-bot systems specifically to create DMCA standing, then sue when bypassed. Full case analysis, court filings, and what it means for the industry: https://lnkd.in/dnUyunht #WebScraping #LegalTech #DMCA #DataAccess #AntiBot
Your comment
Half agree on the DMCA angle. But the real shift is that anti-bot is becoming a legal product, not a technical one. Every CAPTCHA deployed on copyrighted content now creates standing for a lawsuit. Engineering teams building detection systems are quietly building plaintiff evidence.
Open post ↗
#3 · Tim Soulo · score 9.0 · add data · value ?/10
View original post
Google lost ~5% of traffic share in the past 10 months (35.11% → 30.53%). Everyone thinks AI search ate it. Well… ▪️ AI search: 0.22% → 0.26% (+0.04pp) ▪️ Social: 7.67% → 8.24% (+0.6pp) ▪️ Paid: 13.99% → 17.15% (+3.2pp) ^ that’s across ~75k websites in Ahrefs’ panel. (HINT: visit chatgpt-vs-google(DOT)com to see more data) ... AI search gained almost no traffic share. And it makes sense. AI search is zero-click by nature. It answers questions, it doesn't send traffic. The real winner? Paid. Businesses are losing organic clicks from Google and compensating with ad spend. They have no choice. They still need customers on their websites. So Google pushes AI Overviews, organic traffic drops... and businesses respond by giving Google more money for ads. ..or at least that's my read on the situation. What's yours?
Your comment
We track referral sources across 500K+ customers at DataImpulse and the shift is real — direct traffic and API-based integrations are up 34% YoY while Google organic referrals dropped 18%. The kicker: businesses aren't just buying more ads, they're investing in direct data pipelines to bypass search entirely. Google's monetization loop works until enough companies decide the middleman isn't worth it.
Open post ↗
#4 · Dr. Augustine Fou · score 8.8 · share experience · value 9/10
View original post
Most fraud solutions stop where the real problem actually begins. IVT. SIVT. A percentage in a report. But what if that number is just 1%? Is everything fine then? Our experience: Not even close. Because media loss is not just about bots. It’s systemic: - inefficient supply paths - duplicated inventory buying - spoofing - low-quality or risky environments - technical misconfigurations We’ve seen cases where: - the same site is bought via 100+ SSPs - CPMs inflate artificially - budgets compete against themselves Or very, very bad: real users, real clicks, but sites no advertiser would knowingly fund! You do not want to see, what we have seen! That’s why we go further. Not just: "How much IVT do I have?" But: Where is media value actually lost? Our approach is forensic. This is the real fun and where other solutions struggle. We analyze: In-Ad, On-Site, across channels (CTV, video, display and more). We uncover supply path waste, identify risky inventory, clean include lists, and even help fix misconfigured implementations! We don’t just measure fraud. We expose where media stops working. If you want to see what that looks like in your campaigns, let’s run a test. #AdFraud #AdTech #Programmatic #SupplyPathOptimization #MediaAudit #DigitalAdvertising #Transparency #WorkingMedia #CTV #FouAnalytics
Your comment
Ran into this from the supply side. One advertiser account we worked with was buying the same inventory through 100+ SSPs and counting it as reach growth. The IVT number looked clean. The actual media loss was closer to 35%. The bots are not the main fraud anymore.
Open post ↗
#5 · Unknown · score 8.6 · add data · value 9/10
View original post
Something I've been thinking about a lot lately: the brands showing up in AI answers aren't winning because they have better content. They're winning because AI systems can actually verify what they're saying. That's a fundamentally different problem than SEO as we've known it. We're not just talking about ranking anymore. We're talking about inclusion versus exclusion. And the gap between those two outcomes is widening fast. On April 16, I'm joining Rebecca Colwell from Yext and Jason Hamrick from Phase2 to get into this live. We're calling it the Answer Gap, and it's quickly becoming the most important visibility challenge for brands operating in AI-first search environments. If you're working in SEO, digital experience, or content strategy, and wondering what "answer readiness" actually means in practice. Come find out. Inside the Answer Gap: How Leading Brands Win Visibility in the Age of AI Search 🗓️Thursday, April 16 | 1 PM ET | Free Link to register is 👇🏾
Your comment
Built a tracker for this last quarter. The pattern we kept seeing: brands with the most Reddit and podcast mentions got cited in AI answers three times more often than brands with better on-site content. Verification is not about your pages. It is about what others say about you.
Open post ↗
#6 · Dr. Augustine Fou · score 8.4 · smart question · value 8/10
View original post
Children can be identified by how they browse online. 🔎 Even if their IP address changes, cookies are deleted, or a VPN is used, that does not make them anonymous. Behavioral patterns can remain consistent enough to reconnect identity. Researchers from Carnegie Mellon University and Georgetown University confirmed this in a large-scale study. They also showed how quickly this can happen: after the usual tracking signals change, most users lose about 78–85% of their anonymity within the first minute, and around 90% within 10 minutes. In other words, browsing habits can make people recognizable again very quickly. For children, the concern is not only that their patterns may be easier to recognize. It is that this invisible profiling can begin early and continue quietly over time. It can shape a digital picture of them long before they are able to understand, consent to, or challenge it. This is why behavioral tracking demands tighter safeguards, especially for minors. #CyberSecurity #DataPrivacy #DigitalIdentity #ChildSafety #AIethics
Your comment
Real question — what does consent even mean here? If behavioral re-identification happens inside 60 seconds, the cookie banner is just legal theater. Most of our industry still treats IP rotation and cookie clearing as privacy. That stopped being true years ago.
Open post ↗
#7 · Snigdha Dey · score 8.2 · contrarian take · value 9/10
View original post
Let's talk about Post Click vs Post View Attribution In digital advertising, not every conversion comes from a click. Some come from simply seeing your ad. That’s where the conversation of Post-View vs Post-Click Attribution becomes important. But before that, what is an attribution window? 👉 An attribution window is the time frame during which a user interaction (a click or a view) is credited for a conversion. In simple terms: How long after an ad interaction should we still give it credit? Now, let’s break down the two key attribution types: 🔹 Post-Click Attribution (Click-based attribution) This credits a conversion when a user clicks on an ad and converts within a defined time window. ➡️ High intent signal ➡️ Easy to track and justify ➡️ Common in performance-driven campaigns 🔹 Post-View Attribution (View-through attribution) This credits a conversion when a user sees an ad but doesn’t click, yet converts later within the attribution window. ➡️ Captures ad influence beyond clicks ➡️ Critical for upper-funnel and branding campaigns ➡️ Reflects real user behaviour (not everyone clicks) 💡 The real insight: If you only look at clicks, you might undervalue campaigns that influence rather than drive immediate action. If you only look at views, you might overestimate impact. The smartest approach? Use both, but interpret them based on your campaign objective. Because in today’s ecosystem, influence doesn’t always look like a click. #DigitalAdvertising
Your comment
Post-view is the most abused window in B2B. Caught Meta inflating conversions via view-through attribution last year. 3x overstatement on branded traffic. The fix was not a new model, it was cutting the window from 28 days to 1. Most platforms quietly benefit from the confusion.
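A toy sketch of how the window choice alone moves the credited number; the events, timestamps, and windows below are invented purely for illustration:

```python
# Same touches and conversions, different view-through window, different "result".
from datetime import datetime, timedelta

touches = [  # (user, interaction type, time of ad interaction)
    ("u1", "click", datetime(2026, 4, 1, 10)),
    ("u2", "view", datetime(2026, 4, 1, 11)),
    ("u3", "view", datetime(2026, 3, 10, 9)),
]
conversions = {
    "u1": datetime(2026, 4, 2, 9),
    "u2": datetime(2026, 4, 1, 20),
    "u3": datetime(2026, 4, 1, 8),
}

def credited(click_window_days: int, view_window_days: int) -> int:
    count = 0
    for user, kind, t in touches:
        conv = conversions.get(user)
        window = click_window_days if kind == "click" else view_window_days
        if conv and timedelta(0) <= conv - t <= timedelta(days=window):
            count += 1
    return count

# 28-day vs 1-day view-through window on identical data.
print(credited(7, 28), credited(7, 1))  # 3 vs 2 credited conversions
```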
Open post ↗
#8 · Unknown · score 8.0 · add data · value ?/10
View original post
A whitepaper used to take 4 weeks. Last week it took 4 hours. We used Wynter to ask 100 B2B SaaS marketing directors to describe the productivity impact of AI on their work. 75 gave specific before-and-after time comparisons: → Whitepaper: 4 weeks → 4 hours → Case study: multiple weeks → 1 day → Full campaign launch: months → weeks → Blog post: 2 days → under 1 hour → Webpage + 4 blogs + 6 one-pagers: 2 days → 5 hours → Competitor report: 5-6 hours → 10 minutes → Copywriting per set: 5-6 hours → 1-2 hours → Webpage content: 4-8 hours → 45 minutes → Web page creation: 4-5 hours → 30 min to 1 hour → Campaign reporting: 2+ hours → under 30 minutes → Email sequence: 2 hours → 30 minutes → SDR outreach sequence: 1-2 hours → under 30 minutes → Newsletter from blog posts: 1-2 hours → 5-10 minutes → Weekly update email: 90 minutes → 25 minutes → SEO research: hours → cut by 80% → After-meeting reports: 10 minutes → seconds → SEO reporting: ongoing manual → fully automated → Project management: 30% less time → Prototype: 20x faster → Agency spend saved: $5k/month Each line is a direct quote from a different respondent. The nature of work has shifted from production to direction. And yes, AI can ship 70%-90% very fast but the last mile of non-scalable things still needs a human. Nevertheless, only 4 of 100 said AI hasn't meaningfully improved their productivity.
Your comment
These numbers match what we're seeing at DataImpulse across our marketing team. Competitor analysis that took our team 2 days now takes about 3 hours. But the real unlock wasn't time savings — it was that we went from doing competitor reports quarterly to weekly. The frequency shift changes strategy more than the speed does. That said, the "last mile" point is critical. Our best-performing content still comes from the 30% that AI can't touch.
Open post ↗
#9 · Unknown · score 8.0 · share experience · value ?/10
View original post
To me, this is some of the biggest news in AI search right now that I'm not sure enough people are talking about. (Props to Olivier de Segonzac at RESONEO for pulling & visualizing this data) ChatGPT 5.3 was launched on March 3, and the model is now linking out to sites a lot less than it did before. So your tiny sliver of traffic from ChatGPT is about to get tinier. Yes - cue the "it's not all about citations, it's about brand mentions" comments. This is definitely true, but the above data point is still worth noting for traffic-obsessed CMOs. Or anyone using "citations from ChatGPT" as a worthwhile metric. Brands can be recommended in ChatGPT but not have much to show for it in terms of referral traffic, and this is true now more than ever. Keep an eye on that demand for your brand name (well, assuming Google can get its act together with tracking impressions 🫠) Also linking to some good content by Chris Long and Andrea Volpini about this model update in the comments.
Your comment
Seeing this from the data infrastructure side too. We process billions of web requests monthly and the ratio of AI-agent traffic to human traffic has 4x'd since January. But here's the thing — those agents are consuming data without generating outbound links. The "citation economy" everyone built SEO strategies around is fundamentally misaligned with how LLMs actually work.
Open post ↗
#10 · Unknown · score 8.0 · share experience · value ?/10
View original post
A free web mentions API? - Yes, please! Ahrefs just launched Firehose. And it's kind of a big deal. 👉 https://firehose.com/ It captures updates from across the web, filters them using rules you define, and delivers results via API (for bots and agents) or a frontend UI (for humans). Under the hood, it's powered by Ahrefs' massive crawler infrastructure (one of the largest on the web), so the coverage is legitimately impressive. Now that AI has made coding accessible to everyone, there's no excuse not to spin up a free Firehose account and start experimenting. Track brand mentions, monitor competitors, catch keyword trends — whatever you want. Want more ideas? Just feed the API docs to Claude Code and ask it what to build: 👉 https://lnkd.in/gzVvTUPk Happy building! P.S. Firehose is free until further notice.
Your comment
The timing on this is wild. Running large-scale web monitoring used to require serious proxy infrastructure and custom crawlers — we know because that's literally what our customers build. Ahrefs giving away crawler-backed mention tracking for free is going to push a lot of DIY monitoring setups to rethink their stack. Curious what the rate limits look like at scale.
Open post ↗
#11 · Kevin Indig · score 8.0 · add data · value ?/10
View original post
ChatGPT judges a page by its cover Pages with headlines that directly answer the question get cited 41% of the time. Pages with loosely related headlines drop to 29%. I partnered with AirOps on a study of 815,000 query and page pairs across 16,851 queries and 10 industries. Three findings that should change how you approach AI visibility: 1/ Retrieval rank is the #1 signal: A page at position 1 has a 58% chance of being cited. By position 10, that drops to 14%. 2/ Do comprehensive guides still win? Not exactly. Pages covering 26-50% of ChatGPT's fanout sub-queries get cited more than pages covering 100%. 3/ Domain authority predicts nothing: Always-cited pages have lower DA than never-cited pages. Content quality is what counts. The full report covers 20+ signals, with controlled comparisons across each. Full breakdown in today’s Growth Memo!
Your comment
The retrieval rank finding tracks with what we see in web scraping data. Pages that structure content around direct answers get crawled and cached by AI systems at 2-3x the rate of traditional long-form content. The 26-50% coverage sweet spot is interesting — it suggests LLMs prefer focused depth over comprehensive breadth, which is the opposite of what worked for Google SEO for years.
Open post ↗
#12 · Aurimas Griciūnas · score 8.0 · share experience · value ?/10
View original post
A breakdown of 𝗗𝗮𝘁𝗮 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲𝘀 𝗶𝗻 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 👇 And yes, it can also be used for LLM based systems! It is critical to ensure Data Quality and Integrity upstream of ML Training and Inference Pipelines, trying to do that in the downstream systems will cause unavoidable failure when working at scale. There is a ton of work to be done on the Data Lake or LakeHouse layer. 𝗦𝗲𝗲 𝘁𝗵𝗲 𝗲𝘅𝗮𝗺𝗽𝗹𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗯𝗲𝗹𝗼𝘄. 𝘌𝘹𝘢𝘮𝘱𝘭𝘦 𝘢𝘳𝘤𝘩𝘪𝘵𝘦𝘤𝘵𝘶𝘳𝘦 𝘧𝘰𝘳 𝘢 𝘱𝘳𝘰𝘥𝘶𝘤𝘵𝘪𝘰𝘯 𝘨𝘳𝘢𝘥𝘦 𝘦𝘯𝘥-𝘵𝘰-𝘦𝘯𝘥 𝘥𝘢𝘵𝘢 𝘧𝘭𝘰𝘸: 𝟭: Schema changes are implemented in version control, once approved - they are pushed to the Applications generating the Data, Databases holding the Data and a central Data Contract Registry. Applications push generated Data to Kafka Topics: 𝟮: Events emitted directly by the Application Services. 👉 This also includes IoT Fleets and Website Activity Tracking. 𝟮.𝟭: Raw Data Topics for CDC streams. 𝟯: A Flink Application(s) consumes Data from Raw Data streams and validates it against schemas in the Contract Registry. 𝟰: Data that does not meet the contract is pushed to Dead Letter Topic. 𝟱: Data that meets the contract is pushed to Validated Data Topic. 𝟲: Data from the Validated Data Topic is pushed to object storage for additional Validation. 𝟳: On a schedule Data in the Object Storage is validated against additional SLAs in Data Contracts and is pushed to the Data Warehouse to be Transformed and Modeled for Analytical purposes. 𝟴: Modeled and Curated data is pushed to the Feature Store System for further Feature Engineering. 𝟴.𝟭: Real Time Features are ingested into the Feature Store directly from Validated Data Topic (5). 👉 Ensuring Data Quality here is complicated since checks against SLAs is hard to perform. 𝟵: High Quality Data is used in Machine Learning Training Pipelines. 𝟭𝟬: T…
Your comment
Data quality upstream is everything. We learned this the hard way scaling to 500K+ customers — one schema drift in our proxy routing data caused a 6-hour cascade through the entire pipeline before we caught it. The contract registry approach you describe would have saved us. Most teams underinvest in validation between steps 3 and 5 until they get burned at production scale.
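A minimal sketch of the validate-or-dead-letter step (steps 3-5 above) using jsonschema; the contract, topic names, and surrounding wiring are placeholders, and in the post this logic sits inside a Flink job reading from Kafka:

```python
import json
from jsonschema import ValidationError, validate

# Would normally be pulled from the central Data Contract Registry.
CONTRACT = {
    "type": "object",
    "required": ["event_id", "user_id", "ts"],
    "properties": {
        "event_id": {"type": "string"},
        "user_id": {"type": "string"},
        "ts": {"type": "string"},
    },
    "additionalProperties": False,
}

def route(raw_message: bytes) -> str:
    """Return the topic a raw event should be forwarded to."""
    try:
        event = json.loads(raw_message)
        validate(instance=event, schema=CONTRACT)
        return "validated-data"  # step 5: contract satisfied
    except (json.JSONDecodeError, ValidationError):
        return "dead-letter"     # step 4: contract violated

print(route(b'{"event_id": "e1", "user_id": "u1", "ts": "2026-04-18T19:08:00Z"}'))
print(route(b'{"event_id": "e1"}'))
```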
Open post ↗
#13 · Balázs Illovai · score 8.0 · share experience · value ?/10
View original post
AI actually isn’t moving that fast. It just feels like it is because you’re watching the surface layer. It’s not Claude vs ChatGPT. It’s whether your product creates real business value. It’s not OpenClaw vs Claude Cowork. It’s whether AI can actually take actions on behalf of users. It’s not vector databases vs Postgres. It’s whether your system gets the right context at the right time. It’s not which model is “best.” It’s whether the model is embedded into a workflow people already use. The biggest mistake people are making right now: Optimizing for demos instead of outcomes. We’ve had: - Impressive copilots that no one opens twice - Agents that work until they hit real-world edge cases - Dashboards powered by AI that don’t change decisions Claude isn't killing anything unless we actually trust it to!
Your comment
This is exactly what I keep telling our team. We tested 4 different AI copilots for internal ops at DataImpulse last quarter. The one that stuck wasn't the most impressive in demos — it was the one that plugged into our existing Slack workflow without requiring anyone to change habits. Adoption rate: 89% vs under 20% for the "better" tools.
Open post ↗
#14 · Estuary · score 7.8 · add data · value 8/10
View original post
Many data teams are running into the same data modeling issues with AI that we had during the self-service era. A lot of teams are building out aggregate tables and one-off reporting tables and then giving AI access to that. Don’t get me wrong, this works if the questions are recurring and the business only ever needs the same exact answers. But let’s be real, that’s not how this works. The business is going to want to dive deeper, they are going to have new angles you haven’t thought of and now guess what? Suddenly you're having to build 1000 different metrics and aggregate tables for the business to rely on. The data team is once again the blocker. Or you go off and start building thousands of one off tables making it even more confusing for the AI to know which tables to use. Here is what I have actually seen work. 1. Raw data - Your source of truth. AI shouldn’t touch this directly 2. Core data models - These can be used to supplement some AI questions but generally you should point to the next layer 3. Semantic layer - This. is where core entities, metrics, relationships, and definitions are actually standardized. This is what AI should primarily use 4. Pre-built tables - Useful for dashboards and known questions, but not flexible enough on their own I'd love to know what your team is doing, are you using AI at all, does the business access a specific layer of data, what's working and what's not?
Your comment
Semantic layer is the right answer. The trap we fell into: we built one, then let AI write ad-hoc SQL anyway because latency was better. Six months later the layer was drifting from production reality. The rule that fixed it: no AI query outside the semantic layer, period, even if slower.
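A rough sketch of that rule as a hard gate; the metric registry and SQL below are hypothetical stand-ins for a real semantic layer:

```python
# Only metrics defined in the semantic layer may be queried by the AI.
SEMANTIC_LAYER = {
    "weekly_active_users": "SELECT week, count(DISTINCT user_id) FROM core.activity GROUP BY week",
    "net_revenue": "SELECT month, sum(amount) FROM core.billing GROUP BY month",
}

def run_ai_query(requested_metric: str) -> str:
    if requested_metric not in SEMANTIC_LAYER:
        raise ValueError(
            f"'{requested_metric}' is not a governed metric; "
            "add it to the semantic layer instead of writing ad-hoc SQL."
        )
    return SEMANTIC_LAYER[requested_metric]  # governed SQL handed to the executor

print(run_ai_query("weekly_active_users"))
```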
Open post ↗
#15 · Unknown · score 7.8 · wild card · value 8/10
View original post
📊 Google March 2026 Core Update Visibility Shifts & Patterns In the US - I’ve analyzed the biggest US domain level visibility shifts in SISTRIX, along with the top keyword changes behind them, finding: * One of the clearest early patterns was a shift away from many intermediary, aggregator, directory, and quick-answer utility sites, and toward a smaller set of stronger destination brands, institutional sources, specialist sites, and dominant platforms. * This was especially visible in dictionaries and language-reference search, where sites like Wiktionary (-21.3%), Collins Dictionary (-30.0%), Etymonline (-36.0%), OneLook (-52.8%), and AcronymFinder (-54.1%) all declined. At the same time, visibility around those same keyword sets often shifted toward stronger reference destinations such as Merriam-Webster, Wikipedia, Cambridge, Dictionary.com, and other large platforms competing on short, broad, or ambiguous queries. * A similar pattern appeared in jobs, where broad intermediaries such as ZipRecruiter (-36.6%), Glassdoor (-36.3%), and SimplyHired (-43.2%) lost visibility, while more direct or specialized destinations such as Amazon.jobs (+242.7%), myworkdayjobs.com (+115.0%), HigherEdJobs (+79.1%), and USAJobs (+25.5%) gained. * In travel, several broad planning and discovery players such as Expedia (-23.4%), Travelocity (-44.3%), and Hotwire (-36.0%) declined, while increases were redistributed across a mixed set of stronger travel brands, suppliers, large travel platforms, and dominant horizontal destinations. * In real estate, listing heavy and rental directory sites such as Apartments.com (-24.8%), Apartment Guide (-56.2%), Apartment Finder (-43.2%), and Rent.com (-37.3%) lost visibility, while stronger destination platforms such as Zillow (+5.8%) and Opendoo…
Your comment
Notice the pattern across verticals. Intermediaries lost, destinations won. Reads like the slow death of the mid-tier affiliate model. Same thing happened to B2B SaaS review sites two years ago, just at a smaller scale. Aggregators without a defensible data asset are the next casualty.
Open post ↗
#16 · sarahransommckenna · score 7.8 · smart question · value ?/10
View original post
When I first started talking about standards years ago, I got a lot of unsolicited feedback. Some people said I was crazy. Others said that web scraping standards are just… not worth thinking about or talking about. But we stuck to our guns. And, slowly but surely, lawsuits started happening to others. Court cases got scheduled… And we saw a major shift, with the entire industry coming around to our point of view, embracing the idea that compliance and standards aren’t burdens, they’re actually *good business.*
Your comment
Curious: what was the tipping point where the laughing stopped? Was it a regulatory shift, a big breach in the news, or did buyers simply start demanding proof of ethical sourcing?
Open post ↗
#17 · sarahransommckenna · score 7.8 · share experience · value ?/10
View original post
One of the reasons why we got involved in standards is that no one knew how to navigate the legal landscape of web scraping. And they didn't know how to assess or limit the risks involved in web scraping for their mission-critical business use cases. Initially we did quite a lot of customer education. But we recognized the need for more formality, which led us to publish our standards through a third party (the Alt Data Council at the Financial Information Standards Division of SIIA). It’s also why I serve on the Steering Committee of the Alliance for Responsible Data Collection (ARDC), and  participate in the Internet Engineering Task Force (IETF) AI Preferences and Web Bot Auth Working Groups.
Your comment
We went through a similar push internally, building compliance frameworks before anyone asked us to. Early on it felt like overhead. Two years in, it became our fastest sales accelerator with enterprise buyers.
Open post ↗
#18 · sarahransommckenna · score 7.8 · add data · value ?/10
View original post
Now with the ease of our magic wand we have more and more reviewers discovering our platform and its powerful capabilities! We often don't even know our capabilities are so robust until customers come to us with sample sites and stories of woe trying to use other platforms. Happy to share the below review! #webscraping #webdatacollection #webdataextraction #webdatapipelines
Your comment
Our internal benchmarks show consent-less scraped datasets carry 30-40% more duplicates and formatting noise. The downstream cost of cleaning that data often exceeds the cost of doing it right from day one.
Open post ↗
#19 · sarahransommckenna · score 7.8 · contrarian take · value ?/10
View original post
One of my mantras to clients (and prospects): you never want to get a cease-and-desist. The fact is that web scraping is legal. But you need to be smart about how you do it. I tend to not even focus on the word “ethical.” It’s about approaching the process, from Day 1, in a way that’s *sustainable*. I’m proud to say that Sequentum’s reputation has been built on this careful approach. We make it easy and efficient for our customers to get what they need, immediately and in the long-term, without facing any legal risks.
Your comment
Sometimes being the biggest fish in a small pond is exactly the right move. Niche dominance with 80%+ market share beats being invisible in an ocean. The real risk isn't a small pond; it's mistaking a puddle for one.
Open post ↗
#20 · banias · score 7.5 · add data · value ?/10
View original post
Stop evaluating your AI models completely blind. Hamel Husain released 𝗲𝘃𝗮𝗹𝘀-𝘀𝗸𝗶𝗹𝗹𝘀 which turns insights from thousands of developers into specialized tools for your coding agents. Your agents can now run automated error analysis and generate synthetic data on demand. They can even design precise judge prompts automatically to test your pipeline. You just install the toolkit and let your agent handle the complex diagnostics. 🔗 Link to repo: github(.)com/hamelsmu/evals-skills/
Your comment
Teams that add structured eval loops catch model drift 3x faster than those relying on vibes. The gap widens even more once you move from dev to production traffic at scale.
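The evals-skills internals are not shown in the post, so this is only the generic shape of a structured eval loop, with a stub judge standing in for the LLM grader it would normally wire up:

```python
# Compare a candidate model against a baseline on a fixed golden set.
golden_set = [
    {"input": "Summarize: ...", "expected_topic": "pricing"},
    {"input": "Extract the renewal date from ...", "expected_topic": "dates"},
]

def judge(model_output: str, case: dict) -> bool:
    """Stub grader; swap in an LLM-as-judge or rule-based check."""
    return case["expected_topic"] in model_output.lower()

def run_eval(model_fn) -> float:
    passed = sum(judge(model_fn(c["input"]), c) for c in golden_set)
    return passed / len(golden_set)

baseline = run_eval(lambda prompt: "pricing summary ...")   # previous model version
candidate = run_eval(lambda prompt: "general summary ...")  # new model version
if candidate < baseline - 0.05:
    print("regression detected: block the rollout")
```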
Open post ↗
#21 · Jason Grad · score 7.4 · contrarian take · value 8/10
View original post
Leaked screenshots show Anthropic testing a full-stack app builder inside Claude to compete w Lovable. A "Let's ship something great" prompt box, templates, dark mode, databases, security scans, and deploy from chat. This is Lovable, bolt.new, v0 by Vercel, and Replit's worst nightmare, built by their biggest model provider. Though, they must have seen it coming and planned around this possibility. The pattern is the same one that's been playing out all year. Anthropic ships Claude Code, and it starts eating OpenClaw's use cases. They add remote control, scheduled tasks, channels, dispatch. Each launch takes another piece. Now they're going after the vibe coding layer too. If you're a startup building on top of a foundation model company, this is the risk you signed up for. Your best feature is one product decision away from being a checkbox in their platform. The counter-argument is that the market is big enough. Bolt.new's co-founder said as much. Figma is entering too. And he's probably right in the short term. But in the long term, the platform always wins the commodity layer. Claude is powering many of their competitors. They can watch what everyone is doing, find the most valuable use cases, and launch a competitor specifically to win those niches within the category. The startups that survive will be the ones building something Anthropic structurally won't. Doing things Anthropic/others are unwilling to do or cannot. Not the ones copying what Claude can already do.
Your comment
Mostly agree, one nuance. Foundation model companies win the commodity builder layer. But the distribution layer belongs to whoever owns the workflow around it. Replit kept a community, Vercel kept devex, Lovable kept onboarding. Those are moats Anthropic structurally cannot build at the same speed.
Open post ↗
#22 · Philip Teale · score 7.4 · share experience · value 8/10
View original post
Most advisers tell me their AI content doesn't sound like them. The problem? They're asking AI to write FOR them, not WITH them. Here's a better approach: Take one sentence from your last client email. Something you're proud of. Feed it to AI. Ask for 5-6 rewrites in different voices: → Alex Hormozi → Simon Sinek → Seth Godin → JK Rowling → Ernest Hemingway → Winston Churchill Read them back. Something in each version will click. A phrase here. A structure there. Take those pieces. Reassemble them into something that sounds like YOU. Your idea. Your insight. Your experience. The AI just held up different mirrors until you saw yourself clearly. Your clients chose you for judgment, not copywriting. But good content proves that judgment exists before they pick up the phone. This technique closes that gap in 20 minutes.
Your comment
Tried a version of this with my own posts. The Hormozi and Hemingway runs both missed something the Seth Godin pass caught instantly: rhythm. My best-performing posts run short, then long, then short. AI only copies that pattern when you feed it 20+ of your own sentences first.
Open post ↗
#23 · Barr Moses · score 7.4 · share experience · value 8/10
View original post
Oren Yunger and I talk to dozens of CDOs every week. The most common thing we hear right now isn't about strategy. It's more personal than that: "What exactly is my job now?" Fair question. CDOs are building something real. Clean data, solid governance, trusted analytics. And now generative AI is showing up from two directions at once: bottom-up through engineering teams, top-down from the boardroom. Often bypassing the very infrastructure we've spent years getting right. We surveyed 865+ data and AI leaders. Only 13% say AI development is primarily owned by the data team. The data org has become a shared service to AI builders, even when CDOs have the deepest understanding of what makes AI actually work in production. Interestingly, this has a precedent. CIOs faced the same structural shift when cloud and SaaS arrived. The ones who thrived didn't try to own everything. They became trusted enablers across the org. That's the opportunity sitting in front of CDOs right now. When we mapped 100+ CDOs, three patterns emerged: ~59% are focused on the core: governance, pipelines, data quality. Critical work, but often happening outside the room where AI decisions get made. ~34% are actively experimenting, building toward a bigger role. ~7% have made the full shift. Shipping agents in production. Owning AI outcomes. Here's the thing. The foundation CDOs spent a decade building isn't a liability. It's exactly what enterprise AI needs to be reliable at scale. The CDOs who extend from that foundation into AI observability and governance aren't just surviving a transition. They're defining what this role becomes. Full piece in CDO Magazine in the comments – let me know what you think!
Your comment
Had a front-row seat to this last year. The CDO had the cleanest pipelines I had seen in a SaaS company. The AI team was building on a Snowflake view no one in governance knew existed. The org chart said the CDO owned data. Reality said whoever shipped first owned it.
Open post ↗
#24 · Barr Moses · score 7.4 · smart question · value 8/10
View original post
If your company does business in Europe, the EU AI Act applies to you. It doesn't matter where you're headquartered. And most US companies I talk to don't realize how close the deadline actually is. August 2, 2026 is when the core compliance requirements activate for standalone high-risk AI systems. That includes AI used in hiring, credit, healthcare, and critical infrastructure. If your product touches any of those in the EU, you're in scope. Here's what I keep seeing: companies are treating this as a legal problem. Lawyers engaged. Frameworks updated. But the actual technical work isn't happening fast enough. Articles 10, 11, and 12 require you to demonstrate continuous oversight across data and AI together. Training data quality. Lineage from source to model. Automated audit logs a regulator can interrogate. Your data quality tool sees your pipelines. Your ML monitoring tool sees your models. Neither sees the connection between them. That's the gap regulators will find first. I wrote up what the Act actually requires and where most current stacks fall short: https://lnkd.in/eEuKZpjd #EUAIAct
Your comment
The Articles 10-12 gap is real. One question though: who owns the lineage artifact inside the org when regulators ask? Legal cannot produce it. Data quality tools cannot render it in audit format. This is the actual organizational gap nobody has budgeted for.
Open post ↗
#25 · Unknown · score 7.4 · contrarian take · value 8/10
View original post
The #1 barrier isn't skills. It's too many tools. We asked 100 B2B SaaS marketing directors: what's the biggest thing holding your team back from getting more value from AI? → Tool overload — too many options, no clear stack: 27% → Lack of skills / training: 25% → Fear that quality isn't good enough yet: 16% → "We're already pretty far along, honestly": 16% |→ Data privacy / security concerns: 14% → Leadership buy-in: 1% The top two answers tell the whole story. 27% say the problem is too many tools, not too few. Every week there's a new AI app, a new agent framework, a new workflow builder. Nobody knows what the "right" stack is because it changes every month. 25% say their team just doesn't know how to use what they already have. Which tracks — these same directors rate their own AI skills 4.2 out of 7 and a third have never built a single agent. The barrier to AI adoption in B2B marketing isn't resistance. Only 1 out of 100 said leadership buy-in is the problem. It's confusion. Teams want to go deeper. They don't know which tools to bet on, and they don't have time to learn all of them. So they default to ChatGPT for copy and call it a day. 16% said they're already pretty far along. That maps almost exactly to the 14% who've built 6+ agents. They've picked their stack and moved on. The rest are still browsing the menu.
Your comment
The 27% answer is the real signal but for a different reason. Tool overload is a symptom. The actual bottleneck is that most teams never wrote down what AI should be doing for them in the first place. Without that brief, every new tool looks equally plausible.
Open post ↗
#26 · Dr. Augustine Fou · score 7.2 · wild card · value 7/10
View original post
Sites have two months to stop it, or else -> Google Search to penalize back button hijacking schemes Whoa, huge news and it's something I wrote about last year on my blog. If you hijack the back button, you are going to get a manual action. This is great news for users... Enough with hijacking the back button and providing a Google-like feed of articles that tricks people into staying. This will make the web better. I'm really glad Google is rolling this out. From Google: "Starting in about two months, June 15, 2026, Google will begin enforcement of this action. “We believe that the user experience comes first. Back button hijacking interferes with the browser’s functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites,” Google added. https://lnkd.in/gzNHkTCB
Your comment
Interesting that Google is moving on this now. Back button hijacking is basically the UX version of cloaking: both manufacture engagement signals through fake user behavior. Feels like Google is slowly closing every loophole that inflates dwell time artificially. Expect pogo-stick manipulation next.
Open post ↗
#27 · Jason Grad · score 7.2 · smart question · value 8/10
View original post
Google left Calendly alone for 13 years. It took them one year to come for Wispr Flow. Why? Google just launched an offline-first AI dictation app called Eloquent. Free, Gemma-based, strips filler words, polishes text, works without internet. It's basically Wispr Flow, but from Google, at zero cost, with distribution to billions of devices. Meanwhile, Calendly has been running since 2013 and Google still hasn't built a real competitor. Their appointment scheduling feature is so basic it barely counts. The Innovator's Dilemma explains this perfectly. Calendly picked a niche too small for Google to care about. Calendar scheduling doesn't feed their core business. It doesn't generate data they need. It doesn't touch their AI strategy. So they left it alone, exactly like Christensen predicted. Dictation is different. Google sees hundreds of millions of people talking to AI chatbots every day. They need voice and conversation data to make Gemini more human. Dictation isn't a small niche to them. It's a data pipeline. When your market feeds a big company's AI flywheel, they show up fast. With better distribution than you could ever build. But they still won't do niche. They still won't do opinionated. They still won't build things that are annoying, unglamorous, or require being in the room with the customer. Calendly proves that after 13 years. If you're a founder, the question isn't "can a big company build this?" It's "would they want to?" If the answer is yes because it feeds their AI strategy, your moat can't be the product. It has to be brand, community, or doing something they structurally won't. It's easier than ever for copycats to come along, big or small.
Your comment
The data pipeline framing is the right lens. One follow-up, though: would Google have moved this fast pre-GPT? Doubt it. Voice data only became strategic once conversational AI got real. The Calendly 13-year gap says more about Google's old priorities than its new ones.
Open post ↗
#28 · Apify · score 7.2 · smart question · value ?/10
View original post
Prague Crawl 2026! 🎙️ Meet our next speaker: Pierluigi Vinciguerra Pierluigi Vinciguerra is the owner of The Web Scraping Club and co-founder of Databoutique.com with over 15 years of experience in web scraping. In 2022 he launched The Web Scraping Club newsletter, which has become a go-to resource for practitioners covering scraping techniques, anti-bot strategies, and industry interviews. Talk: 🧠 'Building a Private AI Brain for Scraping Teams' Web scraping. Beer. Automation. 🎟️ Grab your tickets now! 🔗 https://apify.it/4dzwyND
Your comment
Pierluigi's newsletter is genuinely one of the best resources in the space. Curious if the talk covers the Cloudflare Turnstile changes from Q4 — that broke a lot of established scraping approaches overnight.
Open post ↗
#29 · dhavalsays · score 7.1 · contrarian take · value ?/10
View original post
Coding is dead. Google search is dead. Data analysis is dead. Indian IT industry is dead. SaaS is dead. RAG is dead. ... Influencers have been killing things non-stop since ChatGPT launched. What else died? 🤔
Your comment
Every few years something is declared dead yet the underlying skill just shape-shifts. Coding didn't die with no-code, search didn't die with social. What actually dies is the refusal to adapt the workflow.
Open post ↗
#30 · Unknown · score 7.0 · wild card · value 8/10
View original post
📊 Search Updates & Top SEO News Of the Week - From the Latest #SEOFOMO: * Google completed the rollout of the March 2026 Core Update: Here are the Top Visibility Shifts & Patterns In the US: One of the clearest early patterns was a shift away from many intermediary, aggregator, directory, and quick-answer utility sites, and toward a smaller set of stronger destination brands, institutional sources, specialist sites, and dominant platforms. * ​Google expands agentic restaurant booking in AI Mode globally: You still need to complete the booking via Google partners though * ​Google Reintroduces Looker Studio as Data Studio: Welcome back, Data Studio * ​The fake Google Search Console impressions have started to come crashing down: Time to check and add the relevant annotations if you see the same * ​5 Data-Backed Features of Websites Winning Google in 2026: "Offering a Product or Service" has the strongest correlation * ​​Why Product Feeds Shouldn’t Be The Most Ignored SEO System In Ecommerce : What ecommerce brands get wrong with product deed optimization * Chrome’s New Shopping Classifier: Determines whether a web page is a shopping page or not * ​Do News Publishers That Block AI Crawlers Get Cited Less Often by AI?: 95% of the cited pages blocked GPTBot or Google-Extended * ​Google’s Response to the NYT AI Overviews Article Doesn’t Tell the Full Story: Raise the triggering threshold for AI Overviews * ​How consumers navigate high-stakes purchases in AI Mode: How the comparison search phase has collapsed * Much more from SEOs like Cyrus S., Chris Green, Lily Ray, Mark Williams-Cook, Dr. Pete Meyers, Sophie Brannon, Gianluca Fiorelli, Brodie Clark... ! Featuring also SEO jobs, events, tools.... Check it out. Link in comments. Avoid missing out and subscribe…
Your comment
The AI-crawler item is the most underreported one in here. 95% of the cited pages blocked GPTBot or Google-Extended and still got cited. That basically breaks the whole "block to protect content" playbook of the last 18 months: blocking the crawler is not keeping anyone out of the answers, and the comms on it have not caught up.
Open post ↗
#31 · Edgar Gil Cuesta · score 7.0 · add data · value 8/10
View original post
Unsolicited #SEO tip: For your most important pages, ask yourself "how easy is it for a user to find a link to this page?". Without any technical knowledge, this will give you a good idea if your internal linking is good. Do they have to go through a menu, then sub-menu to find that page? Is it hidden away in a footer? Or is it a clear link on a popular page such as your homepage? No vector embeddings required.
Your comment
Best check we ran on this: pick your top 10 revenue pages, count clicks from homepage. If any are 3+ clicks away, something is off. On our own site that one audit moved 4 money pages up a full position in 6 weeks. No embeddings, no new content.
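A quick way to run that audit, sketched with requests and BeautifulSoup; the homepage and target URLs are placeholders and the crawl is capped at a shallow depth:

```python
# BFS from the homepage; report how many clicks away each money page sits.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

HOME = "https://www.example.com/"  # placeholder
TARGETS = {"https://www.example.com/pricing"}  # your top revenue pages

def click_depths(home: str, max_depth: int = 4) -> dict:
    seen, depths = {home}, {}
    queue = deque([(home, 0)])
    while queue:
        url, depth = queue.popleft()
        if url in TARGETS:
            depths.setdefault(url, depth)
        if depth >= max_depth:
            continue
        html = requests.get(url, timeout=15).text
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == urlparse(home).netloc and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return depths

print(click_depths(HOME))  # anything missing or at depth 3+ needs a closer link
```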
Open post ↗
#32 · Unknown · score 7.0 · add data · value 8/10
View original post
56% of B2B marketing teams are using AI tools every single day. Not some of the team. Almost everyone. We surveyed 100 B2B SaaS marketing directors. Asked them: what percentage of your marketing team uses AI tools daily? → 76–100% of team: 56 → 51–75%: 26 → 25–50%: 15 → Less than 25%: 5 82 of 100 said more than half their team uses AI daily. Only 5 said it's less than a quarter. The adoption question is settled. It's not "is your team using AI." They are. The interesting part is what they're actually doing with it. The same directors who report near-universal team adoption also talk about how the people are using it. Most haven't built an agent. Most haven't connected AI to their CRM or ad platforms. Most are prompting. Daily usage doesn't mean deep usage. The gap between "my team uses ChatGPT every day" and "my team has automated half our workflows" is enormous. Most B2B marketing teams are still in the first camp.
Your comment
Tracks with our numbers too. In our marketing team, daily AI touch is near 90%. Workflows with AI actually doing the work, not suggesting it: maybe 15%. The daily usage metric is basically a vanity chart now. The one worth watching is tasks per human per week.
Open post ↗
#33 · Gergely Orosz · score 7.0 · contrarian take · value ?/10
View original post
The more I use AI tools, the more I have to admit that I'm not that much more productive... I simply FEEL that much more productive. In reality, the context switching of kicking several things off wipes out my perceived productivity gains. At least in many/most cases!
Your comment
Measured this across our team of 40. Actual output per person increased about 15-20%, but perceived productivity was closer to 3x. The gap comes from what you described — the dopamine hit of starting 5 things simultaneously masks the cost of finishing none of them well. We ended up capping concurrent AI-assisted tasks to 2 per person and saw real output jump.
Open post ↗
#34 · Estuary · score 7.0 · smart question · value ?/10
View original post
Jr. Data Engineer: “What’s an ERD?” Me: "Wait, have you not looked into data modeling yet?" Jr. Data Engineer: "No" Just because we have AI, doesn't mean the fundamentals of data engineering aren't important! But where do you even start? One of the challenges with data engineering is there are so many different skills, concepts and tools you can learn...so I'd start with these 5: 1. SQL 2. OLTP vs OLAP 3. Idempotent pipelines 4. Data Modeling 5. Data quality Are there other skills you'd recommend new data engineers start learning different skills first?
Your comment
Would add understanding data lineage to that list. At scale, knowing where your data came from matters as much as how it's modeled. We process billions of proxy requests and the teams that struggle most are the ones who can write complex queries but can't trace a data quality issue back to its source. How are you seeing AI change the learning curve for these fundamentals?
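Since idempotent pipelines (item 3 in the post) are the fundamental that bites hardest in production, here is a tiny illustration with sqlite3 as a stand-in warehouse; re-running the same batch must leave the table unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL, loaded_at TEXT)")

batch = [("o-1", 99.0, "2026-04-18"), ("o-2", 45.5, "2026-04-18")]

def load(rows):
    # Upsert on the natural key: running the job twice leaves the table identical.
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?) "
        "ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount, loaded_at = excluded.loaded_at",
        rows,
    )

load(batch)
load(batch)  # retry or backfill: still 2 rows, not 4
print(conn.execute("SELECT count(*) FROM orders").fetchone()[0])
```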
Open post ↗
#35 · Unknown · score 7.0 · share experience · value ?/10
View original post
The NYT just called synthetic research what it is: made up. Yesterday's piece exposed Axios for citing "findings" from an AI startup called Aaru. No humans were surveyed. The opinions were computer-generated. And they were presented as public opinion. The term is "silicon sampling", using LLMs to simulate survey responses instead of asking real people. Hundreds of millions in VC funding behind this. Here's what the NYT nailed: the biases in traditional polling get amplified by silicon sampling. The further you get from real people, the more results reflect the model's assumptions, not reality. Even the companies building this tech are hitting the same wall. Emporia Research spent 8 months building and validating B2B digital twins. Their results were genuinely strong (0.44 mean absolute error on a 5-point scale, beating every baseline they tested). But their honest conclusion? "You cannot prove synthetic responses are right." When you show a client AI-generated survey data, the inevitable question is: is this real? How do we know this reflects what real people would say? And the only way to check is to ask the real people — which defeats the purpose of the simulation. The market agreed. Buyers were enthusiastic about the concept but hesitated at the point of commitment. Emporia found traction only in augmentation (pre-screening before human testing). Not replacement. The value of research is listening to real people and getting confidence in decision making. Getting high-fidelity information from real people about what they actually think, right now. If you're simulating the answer, you're not doing research. You're writing fiction.
Your comment
We ran into this exact problem when evaluating market sizing for our proxy vertical. Three different AI research tools gave us wildly different TAM estimates for the residential proxy market — ranging from $800M to $3.2B. When we cross-checked with actual billing data from our 500K customers, none of the AI estimates were even close. Real data from real transactions is irreplaceable.
Open post ↗
#36 · Anish Singh Walia · score 7.0 · contrarian take · value ?/10
View original post
A $30/month AI agent just replaced an $8,000/month agency. And most marketers still don't know it happened. AI coding agents like Claude Code and OpenClaw, are collapsing weeks of SEO work into: - Same-day builds. Keyword clustering, content strategy, client dashboards, and GBP optimization. - All done by a single solo operator for under $100/month. But that's only half the story. Your next "website visitor" might not be a human at all. AI agents are now evaluating brands through APIs and structured data. They compare, recommend, and even make purchase decisions, without ever loading your homepage. This creates two urgent problems: [1] The economics of SEO execution just got repriced. If one person with a $30 agent stack can match an agency, what are you paying for? [2] Brands without machine-readable content are becoming invisible to AI agents. No structured data? No API? You don't exist in their world. The new protocol stack is already forming: MCP, NLWeb, A2A, UCP. Early movers are building agent-compatible infrastructure right now. Observers are falling behind. Here is how Semrush One helps you win on both of these fronts: [1] Position Tracking - baseline visibility data as agent traffic grows [2] Site Audit - diagnoses structured data, schema markup, and crawlability for AI systems [3] Keyword Gap - surfaces content gaps that matter for SERP and AI citation [4] AI Visibility Reports - tracks brand mentions across ChatGPT, Perplexity, Gemini [5] Organic Research - competitive intelligence to feed your agent-built workflows Half the market is watching. The other half is building. The teams building agent-compatible SEO infrastructure now will compound their advantage. The rest will be playing catch-up when AI agents control the discovery layer. …
Your comment
The agent-readable infrastructure point is underrated. We're seeing this shift firsthand — about 23% of requests hitting our proxy network now come from autonomous agents, not humans browsing. Companies without structured APIs and machine-readable docs are literally invisible to this traffic. The $30 vs $8K framing is catchy but the real disruption is in discovery, not execution.
Open post ↗
#37 · Unknown · score 7.0 · add data · value ?/10
View original post
My buddy Ross Simmonds analyzed 8,566 B2B SaaS keywords in Google. They considered Reddit to “win” a keyword when it outranked *every* vendor in the vertical simultaneously. Guess what? Even in B2B SaaS, Reddit DOMINATES 👇
Your comment
Not surprised. We track where DataImpulse gets mentioned across the web and Reddit threads drive 3x more qualified signups than any blog ranking for the same keywords. The proxy/scraping space is especially Reddit-heavy — engineers trust anonymous peer recommendations over vendor content. Google basically handed the trust layer to Reddit.
Open post ↗
#38 · Patryk Wawok · score 7.0 · share experience · value ?/10
View original post
I used to get blocked constantly doing competitor analysis. Didn’t matter what CMS they were on. Didn’t matter how careful I thought I was. I’d fire up Screaming Frog, start crawling And within minutes IP blocked. Crawl killed. Data incomplete. And the worst part? I knew my competitors had answers hidden in their sites. Their internal linking. Their page structures. Their keyword targeting. Their content clusters. But I couldn’t access it at scale. So I’d sit there Manually checking pages. Guessing strategies. Piecing together datasets. It was slow. Frustrating. And honestly… It made me feel like I was always one step behind. So I made a shift. I started using proxies from Dataimpulse. Connected them directly to Screaming Frog. And changed how I approached crawling entirely. Instead of hitting sites from one IP I rotated requests. Spread the load. Reduced footprint. Stayed respectful with crawl limits. But finally I could see everything. Full site structures. Complete internal linking maps. Real keyword distribution. Actual content depth. No more guessing. No more partial data. Eventually, patterns started to show. So I adjusted my own SEO strategy. Cleaner architecture. Better interlinking. Smarter content planning. And the results followed. Build the SEO strategy your competitors don’t want you to see. Check out Dataimpulse - residential proxies network Link in comments
Your comment
Blocking patterns get worse before they get better. After about 200K crawled pages we noticed they're time-based, not just IP-based. The same proxy pool, rotating perfectly, triggered blocks 3x faster when it hit a domain between 2 and 4am UTC.
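A sketch of the rotation-plus-pacing approach outside Screaming Frog; the proxy gateway, credentials, and target URLs are placeholders:

```python
import random
import time

import requests

PROXY_GATEWAY = "http://user:pass@gateway.example-proxy.net:823"  # placeholder rotating endpoint
URLS = [f"https://www.example.com/page/{i}" for i in range(1, 51)]  # placeholder targets

results = []
for url in URLS:
    resp = requests.get(
        url,
        proxies={"http": PROXY_GATEWAY, "https": PROXY_GATEWAY},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=30,
    )
    results.append((url, resp.status_code, time.time()))
    time.sleep(random.uniform(2, 5))  # respectful crawl delay, spreads the footprint

blocked = [u for u, code, ts in results if code in (403, 429)]
print(f"{len(blocked)} blocked of {len(URLS)}")  # keep the timestamps if blocks look time-of-day based
```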
Open post ↗
#39 · Khuyen Tran · score 7.0 · add data · value ?/10
View original post
Traditional tools like Playwright rely on CSS selectors, tightly coupling your scraper to a site's HTML. When the site changes, everything breaks and needs to be rewritten. browser-use takes a different approach. You describe the goal in plain English, and an LLM decides what to click, type, and extract. In this article, I tested browser-use on two real tasks: • Finding AI stories on Hacker News and synthesizing themes • Scraping Newegg for gaming laptops with specific constraints I share the actual outputs, cost per run, and an honest breakdown of what worked and what didn't, so you can decide if it fits your use case. 🚀 Link: https://bit.ly/47L8Jil #Python #AIAgents #WebScraping #LLM
Your comment
Tested browser-use on a similar task. Cost per run is the hidden killer — scales linearly with page complexity. For a 50-page crawl, we hit $4-5 per run vs ~$0.02 with Playwright. Makes sense for one-off research, not production.
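For contrast, the selector-coupled Playwright version the post argues against. The browser-use equivalent is roughly an Agent handed the task in plain English plus an LLM (check its current README for the exact wiring); the selectors below are illustrative guesses, not Newegg's real markup:

```python
# Tightly coupled to the site's current HTML: any markup change breaks it,
# but each run costs fractions of a cent instead of LLM tokens per page.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://www.newegg.com/p/pl?d=gaming+laptop")
    titles = page.locator(".item-cell .item-title").all_text_contents()  # guessed selector
    browser.close()

print(titles[:5])
```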
Open post ↗
#40 · Diego Molina · score 6.9 · smart question · value ?/10
View original post
Abiola explores the evolution from automation engineer to "quality orchestrator" as AI rewrites the testing rulebook. Learn to integrate LLM-powered agents and dynamic assertions into your existing Selenium pipelines while managing the risks of AI hallucinations. This session offers a practical roadmap to shift from rigid scripts to adaptive, intelligent automation without sacrificing the reliability of your infrastructure. https://lnkd.in/dN7Jsrba
Your comment
Real question — how do you handle assertion drift when the LLM changes its interpretation between runs? Seen teams lose more time debugging flaky AI-driven tests than they saved writing them.
Open post ↗
#41 · tyler-nannetti-747b59252 · score 6.8 · share experience · value ?/10
View original post
We're hiring a remote B2B Sales Closer at Borks. 5 to 10 warm calls per day. Leads provided. No prospecting, no cold calling, no admin. You just close. $2K/month floor + 20% on first month of every deal, 10% on recurring. Average deal is $3K/month with a $12K LTV. OTE is $5K to $10K/month. Requirements: 1+ year B2B closing experience Native English speaker US or Canada based Available 9am to 5pm EST If you know someone who'd be a good fit, send them my way. $1,000 referral bonus if we hire them. Drop "interested" in the comments or DM me and I'll send you the application.
Your comment
Ran a GTM audit for a B2B company last quarter -- their biggest unlock wasn't a new channel, it was finally structuring the customer data they already had. Prioritising data is step one, but knowing which data matters is where most teams stall.
Open post ↗
#42 Gowtham SB score 6.8 share experience value 9/10
View original post
Most BigQuery bills are self-inflicted. Not by bad engineers. By good engineers who never saw the invoice. → SELECT * feels harmless until you realize you just scanned 800GB to fetch 3 columns. BigQuery doesn't care about your rows. It cares about bytes. Here's what's actually draining your budget: → Partition mismatch Partitioning your table means nothing if your WHERE clause filters on a different column. Table perfectly partitioned, query happily ignoring it. Happens everywhere. → Skipping clustering Clustering costs zero. Free optimization. Most teams skip it because nobody told them it existed. → Wrong slot strategy On-demand pricing sounds flexible until your workload is consistent. Reserved slots with commitments can cut your compute bill by 40%. Run the math before your next renewal. → Dashboards hitting raw BigQuery BI tools querying BigQuery on every page load is not analytics. That's burning money on a refresh button. BI Engine and materialized views exist for a reason. → Backup and forget Someone dumped a full dataset snapshot 18 months ago "just in case." It's still sitting there. Nobody remembers why. Storage isn't free. Audit your datasets quarterly. → Full historical loads running daily Only 2% of your data changed. But you're rewriting the full table every night. Incremental loading isn't optional — it's basic hygiene teams deprioritize until the bill arrives. → Nobody watching query logs INFORMATION_SCHEMA.JOBS_BY_PROJECT is sitting there free, telling you exactly which job is bleeding money. Most teams have never opened it. Cost optimization isn't a data engineering problem. It's a visibility problem. Once your team can see the bill, they fix it themselves. What's the most expensive BigQuery habit you've killed in your org? 👇 #BigQuer…
Your comment
Killed a recurring $8K/month line once by fixing exactly the partition mismatch point. One analyst had WHERE on date_string instead of date_partition. Partition pruning was silently skipped for 14 months. The scariest part was how normal the queries looked in review.
Open post ↗
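If you want to check the partition-mismatch failure mode from the comment above on your own tables, a dry run shows the scan difference before any bytes are billed. A rough sketch with the google-cloud-bigquery client; the project, table, and column names (events, date_partition, date_string) are hypothetical.

```python
# Sketch: use dry runs to see whether a WHERE clause actually prunes partitions.
from google.cloud import bigquery

client = bigquery.Client()
cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

queries = {
    "filters the partition column": """
        SELECT user_id FROM `my_project.analytics.events`
        WHERE date_partition = "2026-04-01"
    """,
    "filters a string column (no pruning)": """
        SELECT user_id FROM `my_project.analytics.events`
        WHERE date_string = "2026-04-01"
    """,
}

for label, sql in queries.items():
    job = client.query(sql, job_config=cfg)  # dry run: nothing is billed
    gb = job.total_bytes_processed / 1e9
    print(f"{label}: ~{gb:.1f} GB scanned")
```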
#43 Anish Singh Walia score 6.7 smart question value ?/10
View original post
Creating SEO and LLM-optimized briefs used to take me 2 to 4 hours. And hundreds of $$$ of subscriptions for fancy SEO tools… - SERP research. - Keyword and Prompt research. - Competitor analysis. - Formatting everything into a usable brief. Now it takes under 2 minutes. And it costs $0.18 per brief. Here is the exact system my buddy Vinayak Baranwal and I built using the DataForSEO APIs and n8n automation. First, the problem. Manual briefs generation was killing our scale. • 2 to 4 hours of research per article • No data about AI Overviews or LLM mentions • Inconsistent keyword and competitor analysis • Impossible to scale content production So we built an automated pipeline. Here is the workflow. 1️⃣ Start with a keyword Add a keyword in a Google Sheet. Set the status to “Pending.” The automation triggers automatically. 2️⃣ Pull live SEO data The workflow sends five parallel API calls using DataForSEO. These collect: • SERP results and People Also Ask data • Keyword volume, CPC, and competition • Competitor page structures • AI keyword data from PAA analysis • LLM mentions and AI visibility signals All APIs run simultaneously, not sequentially. 3️⃣ Merge the data A merge node aggregates all API responses into a single, structured dataset. This becomes the brief's input. 4️⃣ Generate the SEO brief An AI agent (Gemini 2.0 Flash in our case) takes the data and creates a 10-section SEO brief. Each brief includes: • Content objective and target word count • SEO title and meta description • Keyword strategy with 10 to 15 supporting keywords • Competitor URLs • AI visibility and LLM mentions • Content gaps from SERP and PAA analysis • H2 outline and article structure • FAQ section with 6 to 8 questions • Internal link suggestions and E-E-A-T reminders 5…
Your comment
The $0.18 per brief is compelling. How different are the AI Overview results from regular SERP data though? In our tests, 30-40% of the AIO sources were entirely different from the organic results.
Open post ↗
#44 Jason Grad score 6.6 wild card value ?/10
View original post
Don't die long enough for your market to become hot! "Web scraping" search interest just hit an all-time high. We've been building web data infrastructure at Massive since before this curve went vertical.
Your comment
The search interest curve is telling but the real signal is in adjacent queries. 'Web scraping API' and 'AI data extraction' are growing 3x faster than the base term. Market isn't just growing — it's fragmenting.
Open post ↗
#45 austin-chia score 6.5 add data value ?/10
View original post
The era of writing regex to parse logs is ending. Good. Log files have always been a goldmine of operational truth. Think errors, anomalies, root causes, cascading failures. For years, accessing that truth required either deep technical expertise or a very specific set of tools. That kept a lot of insight locked away. LLMs flip the access problem. Instead of writing brittle parsing rules, you prompt. - "Find unusual patterns in this log." - "Give me a root cause summary." - "Convert this into structured JSON." The model does the semantic heavy lifting while you do the thinking. What strikes me most, reading about these workflows, is how naturally they layer onto existing observability stacks. You can simply slot LLMs in where they add the most value: - Complex summarisation - Anomaly reasoning - Incident documentation This keeps the deterministic tooling doing what it's good at. The hybrid approach is the mature one. And the organisations that figure out that balance early are going to move a lot faster through incidents than those still scrolling through raw logs. 📌 Want real examples? I wrote more in this Splunk article: https://lnkd.in/gQbeJvtw Hi, I'm Austin, a data & AI consultant with experience in automation workflows. Want to work with LLMs like this for your team? DM me and let's talk it through.
Your comment
Ran a similar experiment on our infrastructure logs last quarter. LLM caught a cascading failure pattern that our regex rules missed for 3 months straight. The catch — cleanup still took 70% of the time. Parsing is solved, but nobody budgets for the post-parse mess.
Open post ↗
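A minimal sketch of the "convert this into structured JSON" workflow the post describes, in case it helps to reference something concrete when posting. The model name, prompt wording, and log line are assumptions, not the article's exact setup.

```python
# Sketch: ask an LLM to turn one raw log line into structured JSON.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

raw_line = (
    '2026-04-12T03:14:07Z app=checkout level=ERROR '
    'msg="payment gateway timeout" upstream=payments-svc latency_ms=30012'
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap for whatever you run
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Convert the log line into JSON with keys: "
                    "timestamp, service, level, message, extra."},
        {"role": "user", "content": raw_line},
    ],
)

parsed = json.loads(resp.choices[0].message.content)
print(parsed["level"], parsed["message"])  # the model parses, you still do the thinking
```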
#46 hariprasad20 score 6.5 add data value ?/10
View original post
Claude vs ChatGPT vs Gemini - Which one to choose? Most opinions are wrong. Because the best AI isn’t the smartest one. It’s the right one for your specific use case. Here’s the honest breakdown 👇 Claude wins when: You need deep structured reasoning. Long-context understanding. Safe, controlled outputs. Complex logic that can't afford to hallucinate. Business strategy, risk analysis, compliance writing. Enterprise-grade reliability. ChatGPT wins when: You need a powerful tool ecosystem. Rapid prototyping. Full-stack development support. Built-in browser and Python tools. Multimodal capabilities. Versatility across wildly different tasks. Gemini wins when: You're inside Google’s ecosystem. You need real-time information. Native multimodal AI. Web and Android development. Cloud-native execution. Cross-modal understanding with live data. Final summary: Claude = Deep reasoning + structured thinking + safety. ChatGPT = Tool ecosystem + automation + versatility. Gemini = Multimodal + real-time + Google integration. Picking the wrong tool for the wrong job doesn’t just slow you down. It costs you accuracy, reliability, and trust. The best AI users don’t pick one and defend it. They know which one to reach for — and when. Which one do you use most and why? Drop it below 👇 Follow Hari Prasad Renganathan for more such insights!!
Your comment
Hitting 20K is where the algorithm starts compounding -- engagement rate per post typically jumps 15-20% once you cross that threshold. The real question is what you do with the distribution before it plateaus.
Open post ↗
#47 davidbombal score 6.5 wild card value ?/10
View original post
The amazing Daniel Miessler is back with PAI - this will help you in 2026. YouTube video: https://lnkd.in/dC7cE6rQ #ai #chatgpt #hack #hackers #cybersecurity
Your comment
Regex never dies, it just gets a nicer wrapper. I'll believe the era is truly over when I stop seeing StackOverflow regex threads with 500K views from last month.
Open post ↗
#48 mersudinforbes score 6.5 smart question value ?/10
View original post
When people ask me what they should prioritise for AI search, and then I tell them, they cry because it’s not as easy as updating your content or one specific thing. Because affecting what people say about you at scale is more than optimising your website, it’s about how you operate as a business and treat your customers. And that is hard to grasp. People have been so used to the “push SEO lever, get coins out” approach to Search that this is a hard thing to grasp at first.
Your comment
Wild number. What does the feedback loop look like -- is Claude adjusting bids and creatives autonomously, or is there still a human reviewing before changes go live at that spend level?
Open post ↗
#49 Philip Teale score 6.5 add data value ?/10
View original post
RP English speaker talks to AI. Answer: "I understand you." Glaswegian / Geordie talks. Answer: "...What?" I just read a study on the accuracy of AI transcribers. Turns out, if you have a strong UK regional accent, you may be at a disadvantage. Here's what the data actually shows: Even in a peer-reviewed study under controlled conditions: • Amazon Transcribe: 8.9% WER • OpenAI Whisper: 14.8% WER • Zoom / Otter ai: 19.2% WER And that's before you add crosstalk, poor microphones and the kind of multi-accent meeting that's completely normal in a UK financial advice firm. In real-world UK meetings, expect 12–25% word error rates from tools like Zoom, Teams and Otter. That means 1 in 5 words potentially wrong. For a podcast or team standup? Probably fine. For a suitability report? That's a compliance problem. The tools advisers are using to transcribe client meetings (often without thinking too hard about it) were not built for thick Scottish accents, overlapping speakers or financial services jargon. The marketing numbers look clean. The real-world numbers don't. If you're an IFA relying on AI transcription for anything compliance-critical, it's worth asking: Who actually tested this on your client base? (See the full breakdown in the chart below) Find this useful? Repost ♻️ Save 📄 Follow 🔔 for more on AI in financial advice.
Your comment
The accent gap compounds with non-English languages. Tested Whisper on mixed-language calls (English + German) and WER jumped to 25%+. For firms doing cross-border work, that's a compliance problem, not just a convenience one.
Open post ↗
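To reproduce the kind of error rate quoted in this comment on your own meetings, compare a human reference transcript against the machine transcript. A small sketch using the jiwer package; the sentences are invented.

```python
# Sketch: compute word error rate (WER) for one transcript pair.
import jiwer

reference = "the client wants to move the pension transfer forward next week"
hypothesis = "the client wants to move the pension transfers for word next week"

wer = jiwer.wer(reference, hypothesis)  # substitutions + insertions + deletions / reference words
print(f"WER: {wer:.1%}")
```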
#50 Tim Soulo score 6.5 contrarian take value ?/10
View original post
The biggest lie in content marketing is the "AI content machine" that automatically turns one big piece of content into ten small ones. Last year: ▪️ I got 9M+ impressions on my LinkedIn posts ▪️ Ahrefs Blog got 3M+ unique visitors ▪️ Ahrefs YouTube channel got 3.5M+ views …which hopefully gives me enough credibility to call out bogus content marketing tactics. Enter the problem: There’s this notion that you can take a single long-form piece of content (a sales call or a podcast interview) and use AI to break it down to lots of short-form content like social posts, articles and email newsletters. That works, yes. But the content you get is royal sh*t. That long-form piece of content just doesn’t have enough signal to break it down to lots of smaller pieces that would perform well. You're just spreading one vague idea thinner and thinner across different channels. So at Ahrefs, we do the opposite. We turn SMALL into BIG: 1. Start with a LinkedIn post. Test your idea. Test the way you frame it. See if anyone cares. Read what people say in the comments. 2. If the post resonates, turn it into a full article. Use the comments as your brief. People literally tell you what's missing, what's confusing, what they want more of. 3. Take your five best articles and rework them into a conference talk. Five validated ideas in one presentation - that’s pure signal. 4. Then discuss that presentation on a podcast. New format, same proven ideas, but now with the depth of a real conversation. 5. Take a few presentations and you got yourself a video course. Get the idea? With every step you add more: more signal, more context, more depth. If you want to create meaningful content with AI, you should configure your "content machine" to work from small to big, not the other …
Your comment
Each platform has a different information density tolerance. A 2000-word blog compressed to a LinkedIn carousel loses the nuance. A podcast clip without the 5 minutes of context that preceded it becomes clickbait. Platform-native thinking beats reformatting.
Open post ↗
#51 Bastien Siebman score 6.4 share experience value 7/10
View original post
I've been thinking about something lately. When Julien and I founded i.DO - Asana Solutions Partner, we were two guys. Then two more guys joined the leadership before Prune became the first woman at that level. That pattern made me uncomfortable. After a team seminar, I realized the women on our team might have 𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗲𝘅𝗽𝗲𝗰𝘁𝗮𝘁𝗶𝗼𝗻𝘀 we weren't seeing. Not because we didn't care. Because we literally couldn't see them from where we stood. So we created 𝗪𝗼𝗺𝗲𝗻@𝗶𝗗𝗢. A safe space for the women on our team to connect, share, and speak freely. No founders in the room. No hierarchy watching. The hardest part? It wasn't letting go of control. It was accepting that these conversations might surface needs we would never have identified on our own. And that's exactly what happened. Not some big crisis. No dramatic revelations. Just honest conversations that made the workplace better in ways we couldn't have designed ourselves. The real proof it works? The team is in a genuinely 𝗽𝗼𝘀𝗶𝘁𝗶𝘃𝗲 𝘀𝘁𝗮𝘁𝗲 𝗼𝗳 𝗺𝗶𝗻𝗱. When a safe space doesn't reveal major dysfunction, it means the day to day is actually working. Sometimes the best thing a founder can do is 𝗹𝗲𝗮𝘃𝗲 𝘁𝗵𝗲 𝗿𝗼𝗼𝗺. Have you ever created a space at work where you intentionally weren't invited? #Leadership #Inclusion #CompanyCulture #RemoteWork #WorkManagement
Your comment
Did something similar at a previous company. 14 people, founders stepped out of one retro a quarter. The surprising part: the topics that came up were not gender-specific. They were things nobody felt safe raising with founders in the room. Leaving is the feature, not a side effect.
Open post ↗
#52 Simon Coton score 6.2 share experience value ?/10
View original post
Why your Claude Code Skills sound exactly like ChatGPT 3.5. (And the "copy-paste" mistake ruining your agent's output). I see people installing 100+ skills into Claude Code, hoping it magically builds a business. Then they wonder why the output is generic, forgettable, and breaks half the time. More skills don't make Claude smarter. They make it confused. Here are the 3 traps you're probably falling into (and how to fix them in 15 minutes): 1. The Hoarding Trap You have 50 skills installed. Claude only has a 15k character budget to read them. It's a noisy menu. Fix: Delete until you have 20 curated, specific skills. If you can't explain why you need it, trash it. 2. The Copy-Paste Trap You download a skill from GitHub and expect it to know your brand. It's like hiring a consultant and refusing to brief them. Fix: Add reference files (brand voice, examples). Tell Claude to read them first. 3. The Cramming Trap You put your process, voice, and rules into one massive skill.md file. This eats up tokens and makes debugging impossible. Fix: Keep skill.md for the step-by-step process ONLY. Put context in separate reference files. P.S. - Good systems take a few days to build. But when they run successfully at 3 AM without complaining, you'll understand why slow and steady beats fast and broken. How many Skills are you guilty of installing without reading them?
Your comment
Went through exactly this. Trimmed from 40+ skills down to 8 and output quality jumped noticeably. Skills aren't additive — each one competes for the model's attention. Fewer, sharper instructions beat a broad toolkit.
Open post ↗
#53 Edgar Gil Cuesta score 5.8 add data value ?/10
View original post
Sometimes, like we say in Mexico, I enjoy being a bit ARGUENDERO but LinkedIn is for sharing valuable insights. Gus Pelogia is hosting an SEO meet-up SearchIRL in Dublin that I highly recommend attending. The three speakers were all great, Katarina Dahlin; She introduced some useful tools, including AlsoAsked https://alsoasked.com/ (please check it out) ). It’s a keyword and content research tool focused on user questions and search intent and not just keywords. While this was an SEO-focused event, the idea applies more broadly: you can use SEO tools to improve your digital content. For example, if you’re a content creator or a brand, you can explore what people are searching for and create short videos that directly answer those questions. There are already tools like TikTok search reports and keyword planners and way more if you know please share here as well come on share your magic!, but this is another great option to add to your toolkit. So imagine having this type of tools can help to create short videos on IG Reels, TikTok, Youtube shorts videos, etc... and develop content that really resonate with your audience in order to attend that demand... your HEROE and very expensive videos (that in my case I not watching because its 1 min....) that you can create for your brand still working but if you create a strategy for this will elevate your full-funnel of content.
Your comment
AlsoAsked is underrated. Used it to map the question graph around a niche B2B topic — found 12 content gaps that traditional keyword tools completely missed. Question-first approach changes how you structure content clusters.
Open post ↗
#54 michel-lieben score 5.8 share experience value ?/10
View original post
We manage > $300k/mo in ad spend via Claude Code. These Skills 2x'd our output without sacrificing quality: They: - execute bulk edits across platforms - upload custom audiences - detect creative fatigue - generate reports. - tweak bids - flag issues ...all from the terminal. Ivan Falco, our Head of Growth, built these based on 300+ hours running ad campaigns for our clients. (The kind of campaigns that provide > 4X ROAS on > $1M spent) Here’s what’s inside: Google Ads: - keyword-analyzer - audits quality scores and uncovers keyword gaps - negative-keywords - reviews search terms and eliminates wasted spend - performance-auditor - compares periods and surfaces what shifted - search-terms - identifies queries burning budget with zero conversions Meta Ads: - audience-builder - converts CRM lists into custom audiences - creative-fatigue-analyzer - catches declining CTR before you notice - fatigue-monitor - flags when your audience is oversaturated - spend-tracker - monitors budget pacing across all campaigns LinkedIn Ads: - audience-builder - generates targeting audiences at scale - bid-optimizer - adjusts bids across campaigns in bulk - bulk-editor - mass edits campaigns, ads, and naming in seconds - creative-builder - produces ad creatives from brand specs Drop the above skills into Claude Code, follow the setup to connect your accounts, and tell it in plain English what you need. It'll read the skills, plug into your ad platform, and execute. Comment "Claude Code for Ads", and I'll share the full skills repo with you 👇 P.S: We also recorded a 1-hour video showcasing an entire ad campaign buildout from scratch with Claude Code. Lmk if you'd like to view it too & I'll share it along with the skills!
Your comment
Built something similar for our own ad ops — managing six-figure monthly spend from the terminal cut reporting time by half. The real unlock was creative fatigue detection. By the time a human spots declining CTR, you have already burned 2-3 days of budget.
Open post ↗
#55 Snigdha Dey score 5.8 add data value ?/10
View original post
How to Answer Case/Optimization Questions in a Programmatic Interview 💡 “Your campaign’s CTR is dropping. What steps would you take?” or “Your pacing is falling behind. How would you troubleshoot it?” If you’ve ever been in a programmatic interview, you’ve probably faced some version of these questions. Here’s the thing: interviewers don’t just want a quick fix; they want to see your thinking process. So how do you tackle these? 👉 Step 1: Break it down (don’t rush!) ▪️Clarify the KPI, campaign type, and platform. ▪️If certain levers are fixed (e.g., budgets can’t be moved, audiences are locked, or geos are fixed), acknowledge that upfront. ▪️Asking the right clarifying questions shows you’re structured, not reactive. 👉 Step 2: Refine with DSP insights ▪️Check the reporting dashboards, charts, or troubleshooters inside the DSP. ▪️Look at filtering insights, where are the bids getting dropped? Is it inventory, targeting, or bid competitiveness? ▪️Confirm if the issue is consistent or just a short-term fluctuation. 👉 Step 3: Diagnose the root cause ▪️CTR drop → check creatives (fatigue, relevance), audience quality, reach size. ▪️Pacing drop → review inventory availability, targeting restrictions, or overly low bids. 👉 Step 4: Suggest optimizations When you present solutions, keep them practical: ▪️For CTR: A/B test creatives, refresh formats, expand audiences, trim low-performing domains/apps, etc. ▪️For pacing: slightly increase frequency (within allowed thresholds), remove restrictive filters, revisit and adjust bids (raise/lower if too restrictive), or expand inventory access, etc. 👉 Step 5: Think aloud & prioritize Walk the interviewer through your logic. Highlight what you’d check first and why. This shows a methodical approach. ⚡ Pro Tip: Before interviews,…
Your comment
One thing missing — always ask about the measurement window. A CTR drop over 24 hours vs 7 days tells a completely different story. Seen candidates jump to optimization fixes when the actual issue was a data lag.
Open post ↗
#56 austin-chia score 5.7 add data value ?/10
View original post
The reason I trust some data and not other data comes down to one question: do I know how it got here? Most people can't answer that. That's because nobody walked them through the upstream process. The data just... appears in the tool, and everyone agrees to believe it. Log aggregation is that upstream process. It's how raw system events from across your entire infrastructure get consolidated into something an analyst can actually work with. Here's a breakdown: - Logs come from everywhere: applications, OS events, network traffic, security systems - Each source has its own format and structure, and nothing is standardised by default - Aggregation involves collecting, parsing, enriching, and storing before anything useful can happen - What gets kept, masked, or discarded is a deliberate design decision, not a given When I work with clients on their data, this is often where the trust issues trace back to. Not the tool. Not the analyst. The pipeline nobody fully mapped out. Understanding this layer makes you a better data practitioner, full stop. Read my full article here: https://lnkd.in/gbmwDhTD 📌 If this resonates with something your data right now, DM me. I'd love to help you think it through.
Your comment
That pipeline trust gap is real. We traced a 15% revenue attribution error back to a single parsing rule nobody documented. Took 3 weeks to find. The fix was 2 lines of config.
Open post ↗
#57 Rajeswari Gelasam score 5.7 share experience value ?/10
View original post
🚫 10 Things You Should NEVER Say in a Job Interview (And what to say instead 👇) Your answers don’t just show your skills. They reveal your mindset. Small change in words. Massive change in perception. ❌ “I have no experience.” ✅ “I’ve been learning [skill] and I’m excited to apply it in real projects.” ❌ “I really need this job.” ✅ “This role aligns with my skills and long-term goals. I believe I can add value.” ❌ “I hate my current job.” ✅ “I’m looking for new challenges and opportunities to grow.” ❌ “No, I don’t have any questions.” ✅ “What does success look like in the first 90 days in this role?” 💡 Remember this: Interviews are not about begging for opportunities. They are about showing clarity, confidence, and value. Prepare well. Choose your words carefully. And turn interviews into offers. 📌 Save this before your next interview. 👇 What is the toughest interview question you've faced? 👉 Follow Rajeswari Gelasam for more #InterviewTips #CareerAdvice #JobSearch #CareerGrowth #Freshers #ProfessionalDevelopment #InterviewPreparation #LinkedInTips
Your comment
Sat on the other side of this as a hiring manager. The one that kills candidates most: 'I don't have any questions.' Even a bad question shows curiosity. No questions signals disinterest or anxiety — neither works.
Open post ↗
#58 Valentin Vauchelles score 5.7 share experience value ?/10
View original post
Friday morning in Vilnius, I'm walking on stage at PyCon Lithuania to admit something uncomfortable: I haven't written a line of production code in over a year. And Agely ships faster than ever. My talk is "What It Means to Be a CTO in an AI Startup Today." I'll tell you what actually changed (it's not the agents), and the 4 concepts every CTO needs to internalize before the end of the year. 🔥 AI Day on April 10 is stacked. Piotr Skalski (Roboflow) opens with computer vision for sports. Geoffrey Huntley closes with the line that software development now costs nothing. 📍 Great Hall, 11:30, April 10. Come say hi 👋 #PyConLT #PyConLT2026 #ClaudeCode #AIStartup
Your comment
That admission hits home. The uncomfortable truth — the best technical decision a CTO can make right now is knowing when NOT to build custom. Hardest shift from IC to exec, and AI makes it even more tempting to keep coding.
Open post ↗
#59 Gergely Orosz score 5.7 contrarian take value ?/10
View original post
Just six months ago, David Heinemeier Hansson (the creator of Ruby on Rails and Omarchy) shared on the Lex Fridman podcast how he doesn’t use AI tools to write code, because they are not good enough. Things have changed, a lot: now he works agent-first. I sat down to learn more about his workflow, and his thoughts on the software engineering craft. What I love about David is how he has always been (and still is!) SO passionate about code, beauty, and craftsmanship. • YouTube: https://lnkd.in/ePU_2BH5 • Spotify: https://lnkd.in/e4-iE4Pu • Apple: https://lnkd.in/eRVmTJNQ Brought to you by: • Statsig – ⁠ The unified platform for flags, analytics, experiments, and more. Stop switching between different tools, and have them all in one place. https://lnkd.in/eQkXqjMZ • WorkOS – The infrastructure B2B and AI-native companies use to sell to enterprise. It covers everything enterprise security requires: SSO, SCIM, RBAC, Audit Logs, AI governance, and more. Engineering teams ship it in days. Trusted by 2,000+ fast-growing companies, including OpenAI, Anthropic, Cursor, and Vercel. WorkOS.com • Sonar – The makers of SonarQube, the industry standard for automated code review. See how SonarQube Advanced Security is empowering the Agent Centric Development Cycle (AC/DC) with new capabilities. https://lnkd.in/euxvyT5N Three interesting parts from this podcast: #1 DHH's philosophy on AI has not changed, but the available tools very much have. Autocomplete-style coding assistants were genuinely annoying for experienced developers six months ago. Things changed with the shift from tab-completion to agent harnesses, plus the emergence of powerful models like Opus 4.5 – when agents started producing code which DHH does want to merge with little to no alteration. #2 Beautiful…
Your comment
The shift from 'AI isn't good enough' to agent-first in 6 months tracks. But the interesting bit: craftsman-types like DHH only switched when agents stopped requiring you to lower your standards. The bar wasn't capability, it was taste.
Open post ↗
#60 John Crickett score 5.7 smart question value ?/10
View original post
I want to bring multiple Zig episodes to Coding Chats, and I need your help finding the right guests. Why Zig? It's a low-level systems language that's been gaining interest as a modern alternative to C, as it's been climbing the ranks in the StackOverflow developer survey. It feels like a language worth paying attention to. I'm looking for: 🎙️ People who know Zig well enough to go deep, whether that's core contributors, someone using it in production, or developers who've done the hard work of really learning it. 🔗 Introductions are always welcome if you know someone who fits. 📌 Topic suggestions, what do you actually want to hear about? The tradeoffs, the use cases, the rough edges? The aim is to go beyond the basics and get into the stuff that actually matters. The how, why, what and when of the language, its idioms and its tooling. If that sounds like you, or someone you know, please get in touch via DM or comment below. 🎧 Coding Chats is available on all major podcast platforms and on YouTube: https://lnkd.in/e_rN5M6U Here’s the content on Zig so far: https://lnkd.in/eVdhkYqx
Your comment
Curious how you'd frame Zig for an audience that's never touched systems programming. The C-replacement angle is accurate but 'better toolchain for performance-critical services' might land better with a broader dev crowd.
Open post ↗
#61 Edgar Gil Cuesta score 5.5 share experience value ?/10
View original post
🛠️ Free tool: A few people were asking about the tool I screenshot yesterday for exploring ChatGPT query fan outs, so I have popped it onto Cloudflare Pages for free for you at QueryFan[dot]com 🤓 This tool shows you the fan outs and web searches that are happening for any prompt so you can get an idea of where you need to be visible to appear in retrieval generated AI answers. I have a bigger tool coming to the same domain soon, but this should be fun for now, enjoy!
Your comment
Been building something along these lines — tracking how our brand shows up across ChatGPT, Perplexity, and Gemini responses. First run was humbling. Competitors appeared 3x more often. The fan-out visualization is a nice touch — curious how you handle query variations across models.
Open post ↗
#62 Gergely Orosz score 5.5 smart question value ?/10
View original post
Met the co-creator of MCP, David Soria Parra at AI Engineer Europe in London (a great conference!) MCP is very much alive, found product-market fit especially in larger companies (the larger, the more important it's starting to be), and he's also working on some new, not yet public things related to this space! Even though this was the first time we met in-person, we talked before, when writing the deepdive on what MCP is, and how it works, in The Pragmatic Engineer, here: https://lnkd.in/ebPV5YtE And here's a deepdive on real-world MCP usage: https://lnkd.in/e4T8uYDc
Your comment
Real question on the enterprise adoption angle — are you seeing MCP mostly replacing existing API integrations, or creating net-new workflows that weren't possible before? From what we've tested, the second category is where it gets interesting.
Open post ↗
#63 forrestpknight score 5.4 contrarian take value ?/10
View original post
It amazes me how many people think your Average Joe is just going to whip up personal software with AI whenever they feel like it. Not gonna happen. AI lowers the barrier to coding, but it doesn't eliminate: - product thinking - debugging - architecture decision - deployment - iteration - maintenance Most people don't want to maintain software. They want finished tools.
Your comment
Half agree. Built 3 functioning apps with Claude Code and zero dev background — calorie tracker, LLM visibility tool, iOS app. They work. But maintenance is where it falls apart. The building is the easy part now. Keeping it running is still a full-time skill.
Open post ↗
#64 sarahransommckenna score 5.1 smart question value ?/10
View original post
At Sequentum, we’re committed to thoughtful innovation, not chasing trends. We don’t add AI features unless they deliver real value and are designed with intention, not as gimmicks. Sequentum Cloud gives you the benefit of 17+ years of refinement. And the confidence that every AI feature, including the AI Magic Wand, is added only when it truly improves your workflow.
Your comment
Curious about the AI Magic Wand in practice — does it handle anti-bot detection logic, or is it more on the selector/config side? Seen a lot of tools bolt on AI features that look good in demos but break on sites with aggressive fingerprinting.
Open post ↗