LinkedIn bypassed prompt engineering for its next-generation recommender systems, opting instead for a strategy centered on small, highly refined models, according to Erran Berger, VP of product engineering at LinkedIn. Speaking on the Beyond the Pilot podcast, Berger explained that prompt engineering, a technique involving crafting specific text inputs to guide AI models, was deemed unsuitable for achieving the desired levels of accuracy, latency, and efficiency.
Instead, LinkedIn's AI team developed a detailed product policy document to fine-tune a 7-billion-parameter model, which was subsequently distilled into a series of smaller teacher and student models with hundreds of millions of parameters. This multi-teacher distillation approach proved to be a breakthrough, creating a repeatable process now used across LinkedIn's AI product suite.
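LinkedIn has not published the details of its distillation pipeline, but the general idea of multi-teacher distillation can be sketched as follows: several teacher models each produce a softened probability distribution over outputs, those distributions are averaged into a single soft target, and the student is trained to match it. The function names and temperature value below are illustrative, not LinkedIn's.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about near-misses.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def multi_teacher_soft_targets(teacher_logits, temperature=2.0):
    # Average the softened distributions of several teachers into
    # one soft target for the student (a simple ensembling scheme).
    dists = [softmax(logits, temperature) for logits in teacher_logits]
    n = len(dists)
    num_classes = len(dists[0])
    return [sum(d[i] for d in dists) / n for i in range(num_classes)]

def distillation_loss(student_logits, soft_targets, temperature=2.0):
    # Cross-entropy between the averaged soft targets and the student's
    # softened output; minimizing it pulls the student toward the teachers.
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(soft_targets, student))

# Toy example: two teachers scoring three candidate items.
teachers = [[2.0, 0.5, -1.0], [1.5, 1.0, -0.5]]
targets = multi_teacher_soft_targets(teachers)
loss = distillation_loss([1.8, 0.7, -0.8], targets)
```

In practice this loss is usually combined with a standard supervised loss on ground-truth labels, and the student here would be one of the hundreds-of-millions-parameter models the article describes.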
The company's decision to move away from prompting highlights a growing trend in AI development: the pursuit of specialized, efficient models tailored to specific tasks. While large language models (LLMs) have gained prominence for their versatility, LinkedIn's experience suggests that smaller, fine-tuned models can offer superior performance in certain applications, particularly where speed and precision are paramount.
Berger emphasized the significant quality improvements resulting from this approach. "Adopting this eval process end to end will drive substantial quality improvement of the likes we probably haven't seen in years here at LinkedIn," he stated.
LinkedIn has been developing AI recommender systems for over 15 years, establishing itself as a leader in the field. The company's recommender systems play a crucial role in connecting job seekers with relevant opportunities and helping professionals build their networks. This new approach aims to further enhance the platform's ability to provide personalized and effective recommendations.
The development of these smaller, more efficient models has broader implications for the AI landscape. It suggests that the future of AI may involve a combination of large, general-purpose models and smaller, specialized models working in tandem. This approach could lead to more sustainable and scalable AI solutions, reducing the computational resources required for deployment.
As AI continues to evolve, LinkedIn's experience offers valuable insights into the challenges and opportunities of building next-generation AI systems. The company's focus on efficiency and accuracy underscores the importance of tailoring AI solutions to specific needs, rather than relying solely on generalized models.