The Eachlabs Index: When the Giants Started Following Our Playbook

Hey Eachlabbers, Eftal here. So I've been staring at our August report for the past few days, and honestly, these numbers are wild. Looking at the infographic we put together, that big 57% month-over-month transaction volume growth really jumps out at you, along with 260 active apps and Image-to-Image exploding at 419% growth since January. But before I get carried away celebrating, let me tell you what's really going on behind these metrics.


The Rise of Image-to-Video with Veo 3 and Seedance


You know how sometimes a technology reaches that tipping point where it stops being experimental and becomes real infrastructure? That's what we saw with Veo 3 during August. While Google launched it back in May, August was when developers finally started building production systems with it instead of just playing around.

I remember getting calls from developers who'd been sitting on video projects for months after the May launch, and suddenly, in August, they were ready to ship. It wasn't the launch itself - it was the moment video generation became boring infrastructure instead of cool demo material. That's when you know a technology has made it.

The maturation effect hit our platform hard. Video-related transactions spiked so much in August that it probably contributed significantly to our monthly growth numbers. When ecosystem adoption can move your metrics that much, you realize you're measuring something bigger than individual features.

The Nano Banana Moment

Then Google dropped Nano Banana in late August, and it immediately became the top-rated image editing model worldwide. I won't lie - we weren't expecting the impact to be that immediate. Our Image-to-Image category, which was already growing fast, just exploded.

When you look at our "Fastest growing volume types from Jan to Aug: TOP3" - Image-to-Image at 419%, Text-to-Image at 225%, and Image-to-Video at 201% - you can see exactly when that Nano Banana effect hit our platform.

The timing was perfect, or maybe terrifying, depending on how you look at it. We'd been seeing this steady shift toward visual AI all year, but Nano Banana made it undeniable. Suddenly, every developer conversation was about image editing workflows, character consistency, and multi-turn editing. The stuff we'd been preaching about composable AI architectures finally clicked for people.
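To make "multi-turn editing" concrete, here's a minimal sketch of the loop developers kept describing to us. The endpoint, function name, and payload shape are illustrative placeholders, not our actual API:

```python
import requests

# Hypothetical endpoint and payload shape -- placeholders for
# whatever image-editing model you're calling, not a real API.
EDIT_URL = "https://api.example.com/v1/image-edit"

def edit_image(image_url: str, instruction: str, api_key: str) -> str:
    """Apply one edit instruction and return the URL of the edited image."""
    resp = requests.post(
        EDIT_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"image_url": image_url, "prompt": instruction},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["output_url"]

# Multi-turn editing: each turn feeds the previous output back in,
# which is what keeps the character consistent across edits.
turns = [
    "put the character in a red jacket",
    "move them to a rainy street at night",
    "add a neon sign in the background",
]

image = "https://example.com/character.png"
for instruction in turns:
    image = edit_image(image, instruction, api_key="YOUR_KEY")
print(image)  # final image after all three turns
```

The point is the loop, not the API: the output of one edit becomes the input of the next, and character consistency is what makes that chaining actually usable.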

ByteDance's response was swift - they launched Seedream 4.0 in September, directly targeting Nano Banana. That's when I knew we were onto something big. When the giants start launching competitive products within weeks of each other, you're probably sitting on the right infrastructure.


The Truth About Our "152 Projects Eachlabbing"

Let me be real about this metric. We coined "Eachlabbing" to describe apps using multiple AI models, and yes, 152 projects are doing it. Our report defines it as "using 1+ AI models in the same app" - but that number is kind of meaningless without context.

Some of these "Eachlabbing" projects are just adding a simple image filter to their text app. Others are orchestrating five different models in complex pipelines. We lump them together because, honestly, we're still figuring out how to measure this stuff properly.

The 47 projects doing "10x Eachlabbing" are where things get really interesting - these aren't just using multiple models, they're using more than 20 models each in August alone. Think about that for a second. Twenty different AI models in a single application architecture.

This isn't just high transaction volume - it's genuine architectural sophistication. You can't accidentally orchestrate 20+ models. These are teams building production systems that treat AI models like microservices, composing complex workflows that would have been impossible to imagine two years ago.
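For anyone who hasn't built this way, here's a minimal sketch of the "models as microservices" pattern - the Step abstraction and the stubbed model calls are mine for illustration, not how any particular team on our platform does it:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One model in the pipeline, behind a uniform call signature."""
    name: str
    run: Callable[[dict], dict]  # takes a payload dict, returns an updated one

def run_pipeline(steps: list[Step], payload: dict) -> dict:
    """Compose models like microservices: each step's output feeds the next."""
    for step in steps:
        payload = step.run(payload)
        print(f"completed: {step.name}")
    return payload

# Stubbed model calls; in a real app each lambda would hit a hosted model.
text_to_image  = Step("text-to-image",  lambda p: {**p, "image": f"img({p['prompt']})"})
upscale        = Step("upscaler",       lambda p: {**p, "image": f"up({p['image']})"})
image_to_video = Step("image-to-video", lambda p: {**p, "video": f"vid({p['image']})"})

result = run_pipeline(
    [text_to_image, upscale, image_to_video],
    {"prompt": "a fox in the rain"},
)
print(result["video"])  # vid(up(img(a fox in the rain)))
```

Scale that from three steps to twenty-plus, add branching, retries, and fallbacks, and you get the kind of architecture those 47 projects are running.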

What Our 260 "Active Apps" Actually Means

Here's where I need to be honest about our definitions. We count any app with 10+ transactions per month as "active." That 260 number on our report includes everything from serious production systems handling thousands of users to someone's abandoned weekend project that's still generating automated traffic.
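The threshold really is just a count. A toy sketch with made-up app names, nothing more:

```python
from collections import Counter

# Made-up transaction log for one month; each entry is one transaction.
august_transactions = (
    ["app_alpha"] * 4200   # serious production system
    + ["app_beta"] * 37    # small but real usage
    + ["app_gamma"] * 11   # barely clears the bar
    + ["app_delta"] * 3    # abandoned weekend project: not "active"
)

counts = Counter(august_transactions)
active = [app for app, n in counts.items() if n >= 10]  # our 10+ cutoff
print(active)  # ['app_alpha', 'app_beta', 'app_gamma']
```

app_gamma counts toward that 260 just as much as app_alpha does, which is exactly why the number needs the context I'm giving you here.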

If I'm being really honest, probably 80% of our transaction volume comes from maybe 30-40 serious applications. The rest is experimentation, testing, and projects that never quite made it to production. But experimentation matters too - it's how you know a platform is alive.

The Model Curation Challenge


We tested 126 new models in August and added 64 while delisting 18. Sounds very scientific, right? But here's what that doesn't capture: the dozens of models that never make it past our internal testing phase.

The real selection rate is brutal. Maybe 1 in 5 models we evaluate seriously actually makes it to the platform. And of the ones that do, only a fraction see meaningful usage. But we don't advertise those numbers because, well, they make the ecosystem sound less diverse than our marketing suggests.

The count of 309 different models "actively used" in the last six months includes anything with more than 100 transactions. That's a deliberately low bar because we want to encourage experimentation. But it means our "model diversity" metrics are inflated compared to where the real value concentrates.

What's Actually Happening Here

Despite all my metric complaints, something fundamental is shifting. Nano Banana's late-August launch, Seedream 4.0's September response, and Veo 3's maturation from its May launch weren't random. They represent the moment when visual AI became infrastructure-ready.

Looking at our volume breakdown, the story becomes clear: Image-to-Video leads at 39% of total volume, Image-to-Image processing takes 28%, Text-to-Image accounts for 18%, Text-to-Video sits at just 6%, and other model types make up the remaining 10%. This isn't just about growth percentages - it's about systematic workflow adoption.


The 419% growth in Image-to-Image processing isn't just a big percentage (though percentages from small bases always look impressive). It's that visual processing has become the dominant use case on our platform.

Developers aren't just adding AI features anymore - they're building AI-native applications that assume multimodal capabilities from the start. The composable architecture we've been pushing isn't some future vision; it's how people are building today.

The Real Story

The 57% month-over-month transaction volume growth is real, but it's lumpy and driven by a few major customers and product launches. The 260 active apps include a lot of experiments that won't become businesses. Our model metrics conflate diversity with actual usage patterns.

But underneath all that measurement noise, we're documenting the transition from AI experimentation to AI infrastructure. The developers building on Eachlabs aren't just implementing features - they're constructing production systems that would have been science fiction 18 months ago.

Looking back at our report's clean graphics - the 64 new models ready for Eachlabbing, those 152 projects embracing multi-model architectures - the August developments (Nano Banana's launch, Veo 3's maturation into production use, and ByteDance's competitive response) validated everything we've been building toward. Multiple specialized models working in concert isn't a nice-to-have anymore - it's table stakes. And somehow, we ended up with the infrastructure to support it just as the market figured that out.

Sometimes being in the right place at the right time with the right architecture matters more than perfect metrics. The numbers suggest we're building something that works, even when they don't tell the complete story.