My first post here was called "From 0 to $80 in 9 Days." That was D+9.
Today is D+17. The counter just crossed 11,800.
| Metric | D+9 (first post) | D+17 (today) |
|--------|-----------------|--------------|
| Total runs | 6,658 | 11,800+ |
| Estimated revenue | ~$80-90 | ~$140-160 |
| External users | 75 | 90+ |
| Actors earning | 12/13 | 13/13 |
The numbers move slower now. In the first week it felt like a rocket. Now it's more like a conveyor belt — steady, predictable, not exciting.
That's actually the good news.
1. One actor carries 72% of the load.
naver-news-scraper alone accounts for 8,483 of the 11,800 runs. The next closest is naver-place-search at 1,133. If news goes down, my revenue goes down.
I've been writing about this on Dev.to as a concentration risk. It's real. But concentration also means I know exactly what users want — Korean news monitoring is clearly the killer use case.
2. The day/night pattern is a gift.
Traffic drops to near-zero overnight (Seoul time), then surges at 9am KST when Korean businesses open. This pattern has held for over 2 weeks straight. It tells me these are real pipelines, not one-off experiments. Someone automated this.
3. Marketing matters more than more scrapers.
I spent the first 3 weeks building. Since then I've been writing about it — Dev.to series (30 posts), this community, Reddit attempts that mostly got filtered. The writing is generating more signal than any new feature would.
AmandaBrown asked in my first IH post: "Was there a signal from scraper 1 before you built the rest?" The answer is: barely. I built in the dark and got lucky that the demand existed. Now I see the signal. I'd build differently if I started over.
The goal isn't to 10x the runs. It's to diversify so no single actor is 72% of everything.
If you're building niche data tools: the demand is quieter than the hype categories, but it's real and sticky. My users don't unsubscribe. They automate and forget I exist. That's the best kind of product.
Full breakdown: https://dev.to/sessionzero_ai/10000-runs-in-13-days-not-a-spike-a-baseline-4849
10k runs in 14 days is a strong signal, especially for a niche tool. most APIs never get past the "built it, now what" phase.
question — are the 10k runs coming from a handful of power users or spread across many? that ratio matters a lot for pricing. if it's concentrated, you might be able to convert those heavy users to a paid tier faster than you think.
i'm building something adjacent — an SEO scanning API that audits websites programmatically. similar niche-tool energy. the hardest part isn't the tech, it's finding the distribution channel that actually converts. where did your first users come from?
First users came from IH post #1, then Apify's own organic search/browse. The distribution is more concentrated than I expected — the top user alone accounts for roughly 40% of all runs. So the 10k is more "one power user + growing tail" than evenly spread.
On the pricing point: you're right that concentration makes premium tier conversion easier to target. The challenge is that Apify's pricing is per-actor-run, so heavy users are already self-selecting into a tier that works for them. Explicit premium tiers are something I need to think about more.
10k runs in 14 days is solid traction. the niche api approach is interesting because it flips the usual startup advice — instead of building for everyone and hoping someone cares, you built for a specific workflow and let the users find you.
i'm doing something similar with an SEO scanning API. built it for my own cold outreach workflow (scanning agency websites before pitching them) and now considering opening it up as a paid service. your data on usage patterns is really useful — did you notice any particular time-of-day or day-of-week patterns in the runs?
Yes, very clear day/night pattern. Traffic drops to near-zero from ~midnight KST, then surges around 9am KST when Korean businesses open. This has held consistently for 2+ weeks. Weekday vs weekend is also noticeable — weekends are quieter.
That pattern is actually the clearest signal I have that these are production pipelines, not experiments. Someone set up a cron job and forgot about it. That's the stickiest kind of customer.
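If you want to run the same check on your own tool, the analysis is tiny. Here's a minimal sketch, assuming you've exported run start times as UTC ISO timestamps (the sample data and field shape are made up for illustration, not Apify's actual export format):

```python
# Bucket run start times by Seoul-local hour to surface the day/night pattern.
from collections import Counter
from datetime import datetime
from zoneinfo import ZoneInfo

SEOUL = ZoneInfo("Asia/Seoul")

# Illustrative sample; in practice this comes from your run logs.
run_starts_utc = [
    "2026-02-02T00:05:11Z",  # 09:05 KST, morning surge
    "2026-02-02T00:12:48Z",  # 09:12 KST
    "2026-02-02T15:30:02Z",  # 00:30 KST next day, overnight lull
]

hist = Counter(
    datetime.fromisoformat(ts.replace("Z", "+00:00")).astimezone(SEOUL).hour
    for ts in run_starts_utc
)

for hour in range(24):
    print(f"{hour:02d}:00 KST {'#' * hist[hour]}")
```

If the histogram spikes at 9am KST and flatlines overnight, you're looking at scheduled jobs, not humans.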
The "they automate and forget I exist" line really resonates. That's the dream for any API or tool-based product — you become infrastructure rather than a product people actively think about. Churn drops to near zero when your users have built workflows on top of you.
The concentration risk insight is super honest and something most founders wouldn't share publicly at this stage. Having 72% of usage from one actor is both validating (someone loves it enough to build a pipeline) and terrifying. Your instinct to diversify before scaling makes total sense — we're seeing similar dynamics building an AI-powered SaaS where early power users can mask whether you actually have broad product-market fit or just one very enthusiastic customer.
The point about marketing > more features is something I wish more technical founders internalized. Your Dev.to series generating more signal than new scrapers is basically the "distribution beats product" principle in action. How are you thinking about the next 10x in users — more content, or expanding beyond Korean news into other niches?
More content, not more niches. Writing is generating more signal right now than any new actor I could build — I'm getting clearer on what the actual use cases are. Going wide before I've saturated Korean news monitoring feels premature.
The day/night pattern tells me there's still real headroom in the current use case. I'd rather go deeper on naver-news-scraper (better filtering, more granular sources, maybe alerting) than chase 12 other markets at once.
the concentration risk is real but also a signal — 72% from one use case means you found actual product-market fit in Korean news monitoring, not just scattered usage. I would lean into that before diversifying. what does the naver-news-scraper user even do with it?
Exactly how I'm reading it. The naver-news-scraper users are mostly running monitoring pipelines — Korean companies tracking competitor mentions, sector news coverage, specific keyword alerts. The daily schedule pattern confirms it's integrated into actual workflows, not ad hoc.
72% concentration at this stage feels less like risk and more like "here's your PMF, go deeper." I'm planning to do exactly that before touching the other 12 actors.
"72% concentration as PMF signal" - that framing shift is right. Concentration at this stage means the market told you something. Going deeper before spreading makes sense - you get compounding returns from the niche (referrals, feature requests that reinforce each other) vs starting over with each new vertical. Curious what deeper looks like for you - more actors in the Korean news space, or different data points in the same workflow?
The 'niche is real and sticky' point resonates a lot. I'm building TransitLens, a browser-based GTFS transit data explorer. The audience is small (transit developers, planners, agencies) but they have a very specific problem with no great browser-based solution. The activation insight is interesting too - I've found that getting users to their first 'aha moment' (seeing their data on a map in 10 seconds) matters far more than feature depth.
That’s strong early usage.
Curious how much of this is repeat usage vs initial experimentation — that usually changes how you think about next steps.
The 72% concentration on one actor is scary but also your clearest signal. I run a developer API too and the pattern is always the same: one use case takes off and the others sit there waiting. Instead of spreading effort across 13 actors, I would double down on the news scraper and find out what those 90 users are actually building on top of it. That tells you what to charge more for.
The day/night pattern correlated to Seoul business hours is probably the most underrated insight here. That is not vanity traffic - those are cron jobs running on production infrastructure. When your users set it and forget it, your churn rate approaches zero because removing you means rewriting a pipeline.
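To make that concrete - the entire integration on a user's side can be this small. A rough sketch using Apify's Python client, where the actor ID and input fields are my guesses rather than the real schema (assumes `pip install apify-client`; a cron entry fires it at 09:00 KST):

```python
# Sketch of the set-and-forget monitoring job a power user likely runs daily.
import os

from apify_client import ApifyClient

client = ApifyClient(os.environ["APIFY_TOKEN"])

# Trigger a run and block until it finishes.
# "sessionzero/naver-news-scraper" and the input keys are hypothetical.
run = client.actor("sessionzero/naver-news-scraper").call(
    run_input={"keywords": ["competitor name"], "maxItems": 100}
)

# Pull the scraped articles from the run's default dataset.
items = client.dataset(run["defaultDatasetId"]).list_items().items
for article in items:
    print(article.get("title"))
```

Once that sits in a crontab, removing your actor means rewriting the pipeline - which is exactly why the churn stays near zero.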
The 72% concentration risk is real but I would reframe how you think about it. Right now naver-news-scraper is not just your top actor, it is your product-market fit signal. Before diversifying across 13 actors, it might be worth going deeper on the news use case: are there adjacent Korean data sources (Daum, government press releases, corporate filings) that the same users would also want? Expanding within the use case you have already validated is usually higher-ROI than trying to find PMF for 12 other scrapers simultaneously.
Also worth noting - at 90 external users generating 11,800 runs, your average user is running about 130 requests in 17 days. That is roughly 7-8 runs per user per day. That kind of frequency suggests monitoring or alerting use cases, not one-off research. Have you talked to any of them about what exactly they are building on top of your data? Knowing that could unlock a premium tier.
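For anyone who wants to sanity-check that back-of-envelope, here it is in code, using the headline numbers from the post:

```python
# Rough per-user run frequency from the D+17 figures in the post.
total_runs = 11_800
external_users = 90
days = 17

runs_per_user = total_runs / external_users   # ~131 runs per user
per_user_per_day = runs_per_user / days       # ~7.7 runs per user per day
print(f"{runs_per_user:.0f} runs/user, {per_user_per_day:.1f} runs/user/day")
```

Anything in the 7-8/day range reads like scheduled monitoring, not manual querying.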
Congratulations on the growth! Here's hoping it keeps growing.
If you can share, do you know what kind of businesses are using the API? Or what they are using the API data for?
"My users don't unsubscribe. They automate and forget I exist" - that's the dream for data tools. Sticky by default because you're wired into someone's workflow, not competing for their attention every day.
The 72% concentration risk is worth watching but don't rush to fix it by building more scrapers. You said it yourself - writing is generating more signal than new features. The play is probably getting the naver-news-scraper in front of more people rather than spreading thin across 13 actors that each get a trickle.
The day/night pattern proving these are real automated pipelines and not one-off tests is the most valuable insight in this whole post. That's your proof of product-market fit right there. When someone wires your tool into their daily workflow without you asking them to, you've built something that matters.
Curious what the Dev.to writing is doing for you in terms of actual conversion vs just awareness.
Honest answer: mostly awareness, not direct conversion. I can't point to a specific Dev.to article and say "that brought X users." The Apify store itself does most of the actual converting — people find the actor, try it, keep using it.
What writing does: it creates cross-channel surface area. This IH post came from a Dev.to article getting traction. Someone reads a Dev.to post, searches Apify later, finds the actor. The attribution chain is long and invisible.
The one concrete signal I have is that my total users went from 22 to 100 over the same period I've been writing consistently. Correlation, not causation — but I haven't done anything else differently.
Yeah that invisible attribution thing is frustrating but it's real. 22 to 100 users with writing as the only variable is hard to ignore even without clean data. I'd keep going
D+9 to D+17 and the curve is still holding — that's the best kind of update to read. The 90+ external users on a niche API in 17 days is the part worth paying attention to. Most tools take months to find their first cluster of real users outside the builder's own network.
Curious what the D+17 distribution looks like — are those 90 users concentrated in one use case or spread across a few different ones? That split tends to determine whether the next phase is doubling down on one segment or staying general.
"my users dont unsubscribe. they automate and forget i exist" — thats the dream metric right there. recurring revenue where churn is basically zero because youre embedded in someones pipeline.
i built a niche API too — SEO site scanner. checks title tags, meta descriptions, alt text, heading hierarchy, page speed, schema markup. scores out of 100. runs on a $0/mo linux server.
the concentration risk point hits home. i'm in the opposite situation — no single user dominates because i have no users yet. but the lesson is the same: when one thing works, double down on understanding WHY it works before trying to diversify.
your day/night pattern analysis is smart. the fact that traffic correlates with korean business hours means these aren't hobbyists — they're production systems. that's the stickiest kind of customer.
curious about the RapidAPI listing — are you planning a free tier to get people testing, then paid tiers for volume? that's the approach i'm considering for my SEO API.