// LLM Content Generation

Production LLM Processing at Surfer Scale

We helped Surfer handle massive content generation workloads with a reliable, cost-optimized LLM pipeline built for scale.

offices

USA, Poland

size

60-200 employees

industry

Martech (SaaS tool)

revenue

$25M+ ARR

// Outcomes

The numbers that matter

  • 300B+

    tokens processed

  • 100k+

    credits sold in 6 months

  • 5 months

    from concept to full product release

01 · Our work has changed how content specialists work. Globally

The Challenge

Surfer, the global market leader in SEO optimization & content tools, invited our research team to work on its latest product - Surfer AI. The tool condenses 85% of the article research, writing, and optimization process into a few clicks, giving copywriters more time to refine their articles and rank in Google.

Many copywriters today use tools like ChatGPT to speed up article writing. However, the content these tools produce is often thin on substance, frequently incorrect, and completely unoptimized for SEO. Overcoming the technological limitations behind this was a key challenge of the project.

Surfer AI is an AI writing tool designed to streamline the marketing content creation process by producing high-quality, well-researched articles that closely align with target users' intent. We were also responsible for developing an accompanying tool called Humanizer.

It is worth mentioning that Surfer was able to integrate its core optimization engine into Surfer AI, which put us in a strong position to introduce such a product.

02 · Scope of work

NO TIME TO WASTE

Surfer approached us to share our experience and expertise on the possibilities of current generative technology. Our first task was to define an R&D process that would lead to a final product.

Step 1: Consultations - Early consultations and defining the scope given the current and predicted state of technology.

Step 2: PoC R&D - We delivered a 1-month proof of concept of what is possible with current technology, built on data provided by Surfer using rapid-prototyping tools such as Gradio, GPT, OpenAI Embeddings, and LangChain. The client was happy with the results and decided to launch a large-scale project.
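The retrieval idea at the heart of such a PoC - ranking source material by semantic similarity to a query before writing - can be sketched in a few lines. Everything below is illustrative, not Surfer AI code: the toy character-frequency `embed` is a stand-in for a real embeddings API such as OpenAI Embeddings.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def embed(text):
    """Stand-in for an embeddings API call (e.g. OpenAI Embeddings).
    Here: a toy 26-dim character-frequency vector, for illustration only."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def retrieve(query, documents, top_k=2):
    """Rank source documents by similarity to the query - the core of a
    retrieval step that grounds generated articles in real source data."""
    scored = [(cosine(embed(query), embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

docs = [
    "keyword research for search intent",
    "baking sourdough bread at home",
    "on-page seo optimization checklist",
]
print(retrieve("seo keyword optimization", docs, top_k=1))
```

In a production system the stub would be replaced by real embedding calls and a vector store, but the ranking logic stays the same shape.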

Step 3: Production ready - We built a production-ready solution based on LLMs. Our ML team, together with Surfer's frontend and backend specialists, focused on creating a release version of the product. We created a well-evaluated pipeline of more than 30 distinct ML tasks, while 3 major technological breakthroughs in NLP were released during development. About a week before the planned release, OpenAI shipped the game-changing GPT-4 32k model, which we managed to integrate and evaluate in the final solution before launch - making Surfer the first company in the SEO market with this kind of technology inside.
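Orchestrating dozens of interdependent ML tasks is essentially a dependency-ordered execution problem. A minimal sketch - with hypothetical task names and stubbed logic, not the real Surfer AI pipeline - could look like this:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute stubbed pipeline tasks in dependency order.
    tasks: name -> callable(context) mutating a shared context dict.
    deps:  name -> set of prerequisite task names."""
    context = {}
    for name in TopologicalSorter(deps).static_order():
        tasks[name](context)
    return context

# Hypothetical stand-ins for a few of the 30+ real ML tasks.
tasks = {
    "serp_research": lambda ctx: ctx.update(sources=["doc_a", "doc_b"]),
    "outline":       lambda ctx: ctx.update(outline=["intro", "body"]),
    "draft":         lambda ctx: ctx.update(
        draft=f"draft from {len(ctx['sources'])} sources"),
    "fact_check":    lambda ctx: ctx.update(checked=True),
}
deps = {
    "serp_research": set(),
    "outline":       {"serp_research"},
    "draft":         {"outline"},
    "fact_check":    {"draft"},
}
result = run_pipeline(tasks, deps)
print(result["draft"], result["checked"])
```

Making the dependency graph explicit is what lets individual modules be swapped out and re-evaluated (e.g. after a new foundation model release) without touching the rest of the pipeline.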

Step 4: Real-time ML improvements - We continuously assessed and refined the pipeline. After release, our team focused on evaluating user and internal feedback, which kept us up to date and allowed us to improve the ML solutions weekly. The AI's satisfaction score has DOUBLED since launch, and we introduced new features such as custom knowledge, writing points, product reviews, comparison templates & many more.

03 · Challenges

WE LOVIN' IT

Providing the highest-quality, fact-checked, informative content: Our goal was to create content that is not only SEO-optimized but also well-researched and genuinely informative for the reader. We had to build a knowledge-gathering and fact-checking pipeline of more than 30 steps to ensure that the content we generated matched or exceeded the quality of human writers.

Constantly changing SOTA technology: During development, a genuine breakthrough in state-of-the-art generative models arrived roughly every 3 months. Surfer put heavy emphasis on being the best in the field, so our team constantly evaluated and improved the pipeline on top of the latest foundation models.

Continuous evaluations: We had to merge challenging evaluations, such as comparing the "quality" of writing, with rapid development cycles. We designed an evaluation process that combines efficient module-based internal testing in LabelStudio with internal SEO specialists, and production A/B testing complemented by user feedback comparison. This approach allowed us to release new pipelines every 1 to 2 weeks.
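The pairwise-comparison side of such an evaluation boils down to a win-rate computation over A/B judgments. A minimal sketch - the judgment format here is an assumption for illustration, not Surfer's actual labeling schema:

```python
def win_rate(comparisons, candidate="new"):
    """Fraction of decisive pairwise judgments won by `candidate`.
    comparisons: list of dicts like {"winner": "new" | "old" | "tie"}."""
    decisive = [c for c in comparisons if c["winner"] != "tie"]
    if not decisive:
        return 0.0  # no decisive judgments yet
    wins = sum(1 for c in decisive if c["winner"] == candidate)
    return wins / len(decisive)

# Toy batch of annotator judgments comparing a new pipeline vs. the old one.
judgments = [
    {"winner": "new"}, {"winner": "new"}, {"winner": "old"},
    {"winner": "tie"}, {"winner": "new"},
]
print(win_rate(judgments))
```

A release gate can then be as simple as "ship the new pipeline if its win rate clears a threshold on enough judgments", which is what makes 1-to-2-week release cycles tractable.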

Shipping bleeding-edge technology with high reliability: We successfully addressed the instability issues inherent to LLMs, designing a robust architecture that ensures fail-safe operation across more than 30 interdependent modules - a level of stability ready for production use.
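One common pattern for taming LLM instability is retrying transient failures with exponential backoff and falling back to an alternate model. The sketch below uses a stubbed client and hypothetical model names - it shows the pattern, not the actual Surfer AI architecture:

```python
import time

def generate_with_fallback(prompt, models, call, retries=2, backoff=0.01):
    """Try each model in order; retry transient failures with backoff.
    `call(model, prompt)` is a stand-in for the real LLM client."""
    last_error = None
    for model in models:
        for attempt in range(retries + 1):
            try:
                return call(model, prompt)
            except Exception as exc:  # production code would catch specific transient errors
                last_error = exc
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("all models failed") from last_error

# Stubbed client: the primary model always fails, the fallback succeeds.
def flaky_call(model, prompt):
    if model == "primary":
        raise TimeoutError("simulated transient failure")
    return f"{model}: article draft for '{prompt}'"

print(generate_with_fallback("seo basics", ["primary", "fallback"], flaky_call))
```

With 30+ interdependent modules, wrapping each model call this way keeps a single flaky upstream response from failing the whole article generation.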

// What they say

They do more than what we ask them to do. Bards.ai team always suggests solutions that we couldn't figure out without them.
Bartlomiej Korpus

CTO @ SurferSEO

// Ready to ship?

Let's build something that delivers numbers like these.

Book a meeting