Cut Through the Hype: 10 Entrepreneurs Battle-Test DeepSeek R1 | Lanchi Ventures Insights
We’ve all heard about DeepSeek R1 – but what really matters is how innovators are using it. While the tech world continues to debate its theoretical potential, relatively few examples of it being applied to real-world scenarios exist.
Recently, Lanchi Ventures gathered ten entrepreneurs from across our portfolio companies who have deployed DeepSeek R1 in real-life scenarios, whether by integrating the open-source models into their products, helping clients implement applications, or conducting extensive research into the technology.
Their case studies reveal how open-source models are rewriting the rules of business, and how you and your business can harness this transformation to unlock new opportunities.
Real Business Impact: Before vs. After DeepSeek R1
Embodied Intelligence Company:
Our team discovered that expanding the visual input of large reasoning models like DeepSeek R1 can significantly boost their reasoning ability, even surpassing that of OpenAI’s GPT-4o model. This breakthrough stems from merging information from diverse data sources, including images, text, tactile sensing and robotic motion trajectories.
If this so-called multimodal fusion or cross-modal integration can be represented within a unified world model, artificial general intelligence (AGI) with VOA will become a reality and not just a hypothesis. This is the team’s next focus.
Future multimodal fusion research has three key directions:
1. Causal Representation Learning: Currently, models rely on statistical correlations. We are building on this by explicitly encoding learned causal relationships in the representation space.
2. Causal-based Active Intervention: By actively intervening, agents will be able to better understand how environmental changes affect behavior.
3. Integrating Causal Models and Intervention Capabilities into VOA: The ultimate goal will be to enable agents to switch between modalities and generalize in Out-of-Distribution (OOD) environments.
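To make the fusion idea above more concrete, here is a minimal sketch of late fusion across modalities. It assumes each modality already has its own pretrained encoder producing a fixed-size embedding; the module names, dimensions, and attention-based mixing step are illustrative assumptions, not the team’s actual architecture.

```python
# Minimal late-fusion sketch (illustrative only): project each modality's
# embedding into a shared latent space and let self-attention mix them.
import torch
import torch.nn as nn

MODALITY_DIMS = {"image": 768, "text": 1024, "tactile": 64, "trajectory": 128}

class MultimodalFusion(nn.Module):
    def __init__(self, dims: dict, latent_dim: int = 512):
        super().__init__()
        # One linear projection per modality into the shared latent space.
        self.proj = nn.ModuleDict({m: nn.Linear(d, latent_dim) for m, d in dims.items()})
        # A single attention layer mixes the per-modality tokens.
        self.mix = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True)

    def forward(self, embeddings: dict) -> torch.Tensor:
        tokens = torch.stack([self.proj[m](x) for m, x in embeddings.items()], dim=1)
        return self.mix(tokens).mean(dim=1)   # one state vector per observation

# Usage: a batch of 4 observations, one precomputed embedding per modality.
batch = {m: torch.randn(4, d) for m, d in MODALITY_DIMS.items()}
state = MultimodalFusion(MODALITY_DIMS)(batch)
print(state.shape)   # torch.Size([4, 512]): a shared representation a world model could consume
```

The fused state could then feed a world model or policy head; a causal variant along the lines of the three directions above would add intervention-aware objectives on top of this shared representation.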
Software as a Service (SaaS) Company:
We mainly serve two sectors of the market: B2MassC (education, automotive, real estate) and B2SMB (intellectual property and financial services).
Following the integration of DeepSeek R1, we were able to turn huge quantities of unstructured data (call recordings and chat logs) into structured insights, boosting our lead-management efficiency tenfold while doubling or tripling our customer response rates.
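As a rough illustration of this unstructured-to-structured step, the sketch below sends a chat log to DeepSeek R1 through its OpenAI-compatible API and asks for a JSON lead record. The endpoint and model name follow DeepSeek’s public documentation, but the prompt, schema, and field names are illustrative assumptions rather than the company’s production pipeline.

```python
# Hedged sketch: turn a raw chat log into a structured lead record with
# DeepSeek R1 via its OpenAI-compatible API. The schema and prompt below are
# illustrative assumptions, not a production extraction pipeline.
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

chat_log = """Customer: I'm comparing your premium plan with a competitor's.
Agent: Happy to help. What budget and timeline are you working with?
Customer: Around $2k a month, and we need to decide by the end of March."""

prompt = (
    "Extract lead information from the conversation below. "
    "Reply with JSON only, using the keys: intent, budget, decision_deadline, follow_up_action.\n\n"
    + chat_log
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",   # DeepSeek R1
    messages=[{"role": "user", "content": prompt}],
)

# In production you would validate the output and retry on malformed JSON.
lead = json.loads(resp.choices[0].message.content)
print(lead)
```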
DeepSeek R1 also shows great potential in the education market. The scripts it generates are more logically coherent, providing the precise and concise responses our clients require – this efficiency has impressed and even astounded some sales managers. In our business, client perception is everything; DeepSeek R1 helps to deliver that ‘how did they know?’ moment.
Social Product Company:
Our experience with DeepSeek R1 shows that combining AI with human interaction is the winning formula. People still crave real connections – they’ll pay for human-AI hybrid services, but not pure bots.
We believe powerful open-source tools like DeepSeek R1 will democratize game creation. Tomorrow’s innovators will craft engaging games by simply expressing their ideas in plain language – no code required.
Counterintuitively, though, the current AI response pattern (user message → long AI reply) feels unnatural. Taking cues from Stanford’s Smallville experiment, we’re building AI that thinks between interactions – like how friends remember shared experiences even when not chatting.
This represents a fundamental shift. While traditional social platforms act as mere message routers, AI-native social apps should ideally host ‘digital souls’ on their servers to deliver truly immersive experiences.
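A minimal sketch of what “thinking between interactions” could look like is shown below: after each session the agent distills the exchange into a memory it can recall next time, loosely in the spirit of the Smallville reflection loop. The Companion class and the llm() placeholder are hypothetical; any chat-completion model (DeepSeek R1 included) could sit behind them.

```python
# Illustrative sketch: an AI companion that reflects between chats instead of
# only reacting to messages. The llm() stub stands in for any chat model.
import time
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (e.g., DeepSeek R1)."""
    return "(model reply to: " + prompt.splitlines()[-1] + ")"

@dataclass
class Companion:
    memories: list = field(default_factory=list)

    def chat(self, user_msg: str) -> str:
        recalled = "\n".join(self.memories[-5:])   # surface recent shared moments
        return llm(f"Shared memories:\n{recalled}\n\nUser: {user_msg}\nReply warmly:")

    def reflect(self, transcript: str) -> None:
        # Runs in the background between sessions, not while the user is waiting.
        summary = llm("Summarize what we learned about the user and what we did "
                      "together, in one sentence:\n" + transcript)
        self.memories.append(f"[{time.strftime('%Y-%m-%d')}] {summary}")

buddy = Companion()
print(buddy.chat("Remember that hiking trail we planned?"))
buddy.reflect("User: Remember that hiking trail we planned? ...")
```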
Interactive Gaming Company:
When comparing DeepSeek R1 with Claude 3.5, we observed distinct narrative approaches: DeepSeek R1 prioritizes logical structure in information delivery, while Claude 3.5 emphasizes visual storytelling.
For instance, when describing physical confrontation scenarios, R1 methodically details causal relationships and outcome verification (“did the strike land?”), whereas Claude 3.5 focuses on visual representation.
Integrating DeepSeek R1 allowed us to consolidate functions from five agents into two without compromising performance – in fact, the consolidation improved linguistic precision, human-like style, and rule adherence. Most crucially, it reduced latency to the millisecond level, alongside improvements in storytelling, emotional understanding, and conversation management.
Where Should Entrepreneurs Turn After DeepSeek?
Infrastructure Company A:
Technically proficient teams should be able to replicate the DeepSeek model from previously published material. Those with both technical expertise and funding would be advised to boldly invest in foundation model R&D, while resource-constrained teams should prioritize application-layer innovation.
A computer science degree is no longer a prerequisite; even liberal arts graduates can launch startups that harness the full power of AI. Future trends are shifting from asset-heavy models to intellectual-capital-intensive, algorithm-driven, lean ones. Tools like DeepSeek R1 are lowering costs, diminishing the importance of physical assets, and making creativity and execution speed the key factors.
Most importantly, DeepSeek R1’s open-source timing and ease of use have created momentum that will revitalize the entrepreneurial ecosystem. All great breakthroughs stem from true openness. So, when will we witness truly open large models? The countdown has begun.
Infrastructure Company B:
Key client demands for AI inference services include: concurrency, scalability, cost-effectiveness, and latency optimization.
DeepSeek R1’s rise indicates that a fully open-source foundation model is inevitable, and cost-performance competition will accelerate AI democratization. In this scenario, distilled models should be considered, as many applications don’t require the capabilities of the fully-fledged V3 or R1 models.
The industry is currently in a price war, with many companies even offering application programming interface (API) services at a loss. While this presents a challenge in the short term, forecasts and overseas trends both point to the potential for a healthy market. Once the scale of the market reaches a critical size, it is expected that the world will embrace the era of free AI. In the future, large model costs may decrease tenfold, and most applications will no longer need to build their own models.
Data Services Company:
Our core business is to transform an enterprise’s on-premises data into actionable knowledge via intelligent systems and to embed it into core enterprise workflows.
Historically, three barriers hindered the adoption of AI by enterprises: the challenge of eliminating model hallucinations, the inability to ensure data security and control, and poor performance in reasoning capabilities and explainability.
DeepSeek R1’s release has significantly accelerated industrial AI deployment, effectively eliminating user education costs, particularly for state-owned enterprises and key accounts.
Technical Perspectives on DeepSeek R1
Large Model Company:
DeepSeek’s most striking breakthrough is its ability to follow a pure reinforcement learning (RL) reasoning process, offering a potential pathway for AI to surpass human cognition. Although current RL datasets remain limited in scale, their future growth will depend on dedicating greater computing resources, as the scaling-law hypothesis suggests.
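For reference, the scaling-law hypothesis alluded to here is usually stated as a power law in compute, in the form popularized by Kaplan et al. (2020) for pre-training loss. Whether RL-based reasoning training follows the same form is precisely the open question; the exponent below is an empirical pre-training fit, not an RL result.

```latex
% Pre-training compute scaling law from Kaplan et al. (2020), shown only to
% illustrate the hypothesis referenced above; it is not an established result
% for RL reasoning data. L = test loss, C = training compute, C_c = fitted
% constant, alpha_C = fitted exponent (roughly 0.05 in the original study).
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```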
This tech boom holds great promise for the future. Until relatively recently, AI development companies lagged behind OpenAI, believing they couldn’t catch up with its pace, but things have changed. Now, we are certain to see greater investment and innovation in the industry, pushing its capabilities further still.
AI Product Company:
Whether DeepSeek is open-sourced isn’t our main concern; what really matters is its suitability and the significant cost reductions it offers. We first assess a model’s capability, then speed, and lastly stability. This year will be a crucial one for Agent development. The key is delivering top performance in the critical steps, regardless of the model used. With Agents, the more flexibility you provide, the more stable the underlying models must be to handle key tasks and maintain control.
Future application development will focus on three main directions:
1. Reasoning Model: Agents need strong reasoning abilities to advance.
2. Real-time APIs and Interaction: efficient, end-to-end systems will be crucial for rapid responses and smooth multimodal interactions (e.g., handling interruptions and context-switching in conversations).
3. Operator and Container: Through context engineering optimization, developers can enhance task efficiency without relying on specific models.
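To illustrate the context-engineering idea in point 3, here is a minimal, model-agnostic sketch that assembles one agent step’s prompt from a fixed system role, a budget-limited selection of relevant notes, and the latest tool output. The function names, the keyword-overlap relevance score, and the example task are all assumptions for illustration.

```python
# Hedged sketch of context engineering: build a compact, relevant prompt for
# one agent step so the task works regardless of which model serves it.
def relevance(snippet: str, task: str) -> int:
    # Toy relevance score via keyword overlap; real systems would use embeddings.
    return len(set(snippet.lower().split()) & set(task.lower().split()))

def build_context(task: str, notes: list, tool_output: str, budget_chars: int = 2000) -> list:
    picked, used = [], 0
    for note in sorted(notes, key=lambda n: relevance(n, task), reverse=True):
        if used + len(note) > budget_chars:
            break
        picked.append(note)
        used += len(note)
    return [
        {"role": "system", "content": "You are a task-focused operator agent."},
        {"role": "user", "content": f"Task: {task}\n\nRelevant notes:\n" +
                                    "\n".join(picked) +
                                    f"\n\nLatest tool output:\n{tool_output}"},
    ]

messages = build_context(
    task="Book the cheapest refundable morning flight to Shenzhen next Monday",
    notes=["User prefers morning departures", "Corporate card ends in 4321",
           "User is vegetarian"],
    tool_output="flight_search: CZ3102 08:15, CNY 980, refundable",
)
# `messages` can be sent to any chat-completion endpoint (DeepSeek R1 or others).
```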
Embodied Intelligence Company:
While DeepSeek R1 hasn’t sufficiently advanced to surpass its American counterparts at present, it demonstrates unique advantages in certain contexts and applications:
1. Engineering optimization and human input: DeepSeek R1’s extensive human input and engineering optimizations during data filtering are crucial, yet often overlooked, factors behind its strong performance.
2. Product-Centric Design: R1 enhances interaction by incorporating user feedback into results, an improvement driven more by product design than model capability.
Undeniably, DeepSeek shows potential to achieve technical breakthroughs in the next one to three years. The first will be integrating the prediction and reinforcement learning (RL) capabilities demonstrated by traditional large models, potentially internalizing long-term reasoning as the model’s “fast thinking”. The second is DeepSeek’s mixture-of-experts (MoE) technology, which is expected to greatly reduce large-model application costs sooner than anticipated. The third is the application layer’s future reliance on multiple large models rather than a single one, with Agent systems presented as products.
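As a rough illustration of why MoE reduces serving costs, the sketch below routes each token to its top-k experts so only a fraction of the parameters is active per forward pass. The shapes, the value of k, and the dense-matrix experts are simplifying assumptions; DeepSeek’s actual MoE implementation is considerably more sophisticated.

```python
# Hedged sketch of top-k mixture-of-experts routing with NumPy.
import numpy as np

def moe_layer(x, expert_weights, gate_weights, k=2):
    """x: (tokens, d); expert_weights: (n_experts, d, d); gate_weights: (d, n_experts)."""
    logits = x @ gate_weights                      # (tokens, n_experts) gating scores
    top_k = np.argsort(logits, axis=-1)[:, -k:]    # the k best experts for each token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, top_k[t]]
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                       # softmax over the selected experts only
        for weight, e in zip(probs, top_k[t]):
            out[t] += weight * (x[t] @ expert_weights[e])   # only k of n_experts ever run
    return out

rng = np.random.default_rng(0)
tokens, d, n_experts = 4, 8, 16
y = moe_layer(rng.normal(size=(tokens, d)),
              rng.normal(size=(n_experts, d, d)) * 0.1,
              rng.normal(size=(d, n_experts)))
print(y.shape)   # (4, 8): same output shape with roughly k / n_experts of the expert compute
```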
Large model companies face a key choice: develop cutting-edge models or push for applications. In our experience, relying solely on a foundation model makes it difficult to meet high-end application needs. For example, GPT is close to product-grade, but fine-tuning to bridge the “last centimeter” gap can easily set developers back 10-20% of the model’s overall development cost.
Therefore, in the early product design stage, developers should strive to balance the foundation model’s scale, cost, and actual capabilities. Doing so, rather than blindly chasing the model’s limits, can help avoid high application costs at later stages or having to start over.