Core Concepts
How LocaleCloud Works
LocaleCloud combines the power of AI translation with an efficient edge caching system to deliver fast, high-quality translations for your applications.
Architecture Overview
[Architecture diagram: Your Application → Edge Cache Layer → AI Translation Model]
The Translation Process
When you call the t() function in your code, the following process takes place (a runnable sketch of this flow appears after the list):
- Cache Check: LocaleCloud first checks if the translation for the given term already exists in the edge cache.
- Cache Hit: If the translation is found in the cache, it is immediately returned to your application, resulting in near-instant performance.
- Cache Miss: If the translation is not in the cache (typically during development or when adding new content), the request is forwarded to the AI translation model.
- AI Translation: The AI model translates the term, taking into account any contextual information provided.
- Cache Update: The new translation is stored in the edge cache for future requests.
- Response: The translated term is returned to your application.
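Under the hood, this is a cache-aside lookup. The sketch below is illustrative only: resolveTranslation, edgeCache, and aiTranslate are hypothetical names standing in for LocaleCloud's internals, with the cache stubbed as a Map so the flow runs end to end.

```js
// Illustrative sketch only: edgeCache and aiTranslate are hypothetical
// stand-ins for LocaleCloud's internals, stubbed so the flow is runnable.
const edgeCache = new Map();

async function aiTranslate(term, locale, context) {
  // Placeholder for the real AI model call.
  return `[${locale}] ${term}`;
}

async function resolveTranslation(term, locale, context = "") {
  const cacheKey = `${locale}:${context}:${term}`;

  // 1. Cache check
  if (edgeCache.has(cacheKey)) {
    // 2. Cache hit: return immediately, no AI call needed
    return edgeCache.get(cacheKey);
  }

  // 3. Cache miss: forward the request to the AI translation model
  const translation = await aiTranslate(term, locale, context);

  // 4. Cache update: store the new translation for future requests
  edgeCache.set(cacheKey, translation);

  // 5. Response: return the translated term to the application
  return translation;
}
```

Note that the cache key combines locale, context, and term, so the same English term can map to different cached translations per language and per context.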
Note: This process is transparent to your application code. You simply call t("Hello") and LocaleCloud handles the cache checking and translation automatically.
Edge Caching Strategy
LocaleCloud’s edge caching system is designed to optimize performance in production environments (a brief invalidation sketch follows the list):
- Development Mode: During development, you’ll naturally add new content to your application, which triggers AI translations and populates the cache.
- Production Mode: By the time your app is in production, most or all translations are already cached, resulting in minimal latency for your users.
- Global Distribution: The edge cache is distributed globally, ensuring low-latency access for users around the world.
- Automatic Invalidation: When you update a translation or its context in the dashboard, the cache is automatically invalidated and updated with the new translation.
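As a rough illustration of the invalidation point above, the sketch below reuses the hypothetical edgeCache and cache-key scheme from the earlier flow sketch; LocaleCloud's actual invalidation mechanism is internal and may differ.

```js
// Hypothetical invalidation: when a translation or its context is edited
// in the dashboard, the stale entry is dropped so the next t() call falls
// through to the AI model and re-populates the cache with the new value.
function invalidateTranslation(term, locale, context = "") {
  edgeCache.delete(`${locale}:${context}:${term}`);
}

// After editing the Spanish translation of "Hello, world!" in the dashboard:
invalidateTranslation("Hello, world!", "es");
// The next resolveTranslation("Hello, world!", "es") is a cache miss and
// caches the updated translation.
```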
Example: Cache Flow
Here’s a practical example of how the caching works in a real application:
```jsx
// First encounter of this term (likely in development)
const greeting = t("Hello, world!");
// Cache miss: the AI model translates the term and caches the result
// Returns: "¡Hola, mundo!" (for Spanish)

// Later usage (development or production)
const greetingAgain = t("Hello, world!");
// Cache hit! Returns "¡Hola, mundo!" instantly
// No API call or AI processing needed

// The same term in another component
function Footer() {
  // The same cache entry is used; no duplicate translation is made
  return <p>{t("Hello, world!")}</p>;
}

// By production, all of these translations are already in the edge cache,
// ensuring a fast user experience
```
AI Translation Model
LocaleCloud uses advanced large language models (LLMs) specialized for translation:
- Context-Aware: The AI understands the context of your content, resulting in more accurate translations than traditional word-for-word methods (see the usage sketch after this list).
- Nuanced Understanding: The model handles idioms, cultural references, and industry-specific terminology appropriately.
- Continuous Improvement: The AI model is regularly updated to improve translation quality based on the latest research.
- Human-in-the-Loop: You can review and refine translations in the dashboard, providing feedback that improves future AI-generated translations.
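To make the context-aware point concrete, here is a small usage sketch. The options-object form of t() shown here is an assumption for illustration; the actual way to attach context may differ (for example, via the dashboard).

```js
// Hypothetical context option: the exact t() signature is an assumption.
// "Book" is ambiguous in English; context steers the model toward the
// intended sense in the target language.
const noun = t("Book", { context: "a printed book in a library app" });
// e.g. Spanish: "Libro"

const verb = t("Book", { context: "button label for reserving a flight" });
// e.g. Spanish: "Reservar"
```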