AI4ALL @ Thaicom
Secure enterprise GenAI foundation integrating chat interfaces, identity, retrieval, governance, and model access patterns.
Problem
Enterprise teams wanted useful GenAI access, but needed identity, governance, knowledge retrieval, and repeatable deployment patterns before adoption could scale responsibly.
My role
AI Engineer contributing platform architecture, LLM integration, RAG patterns, authentication, and deployment workflows during a previous role at Thaicom.
System design
The platform connected chat UX, Microsoft Entra ID access control, model providers, retrieval pipelines, and operational guardrails.
Data / AI approach
Retrieval-augmented generation (RAG) patterns connected approved internal knowledge sources to LLM responses, with prompt design and evaluation practices to improve response reliability.
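The RAG flow can be sketched at its simplest: retrieve approved passages, then ground the prompt in them. This is an illustrative sketch only; the function names, the word-overlap scoring (a stand-in for a real vector search), and the sample documents are assumptions, not the production pipeline.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model: answer only from the approved context passages."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the approved context below.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}"
    )

# Hypothetical approved sources for illustration.
docs = [
    "Satellite capacity planning is reviewed quarterly.",
    "Office parking passes are issued by facilities.",
]
prompt = build_prompt(
    "When is capacity planning reviewed?",
    retrieve("capacity planning review", docs),
)
```

In the real platform the retrieval step would hit an indexed knowledge store and the prompt would go to a governed model endpoint; the shape of the flow is the same.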
Architecture
LibreChat and related LLM interfaces integrated with Bedrock and OpenAI-compatible model endpoints, Microsoft Entra ID, RAG services, and cloud deployment patterns.
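For context, an OpenAI-compatible interface such as LibreChat talks to a model gateway using the standard chat-completions wire format. The sketch below only builds the request; the model ID, token value, and the idea of minting the bearer token via Entra ID are placeholder assumptions.

```python
import json

def build_chat_request(token: str, model: str, user_message: str) -> tuple[dict, str]:
    """Return (headers, body) for a POST to <gateway>/v1/chat/completions."""
    headers = {
        "Authorization": f"Bearer {token}",  # e.g. a token obtained via Entra ID
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return headers, body

# Example with placeholder values.
headers, body = build_chat_request("example-token", "example-model", "Hello")
```

Centralizing this call behind a gateway is what lets identity checks, logging, and provider swaps (Bedrock vs. OpenAI-style) happen without changing the chat interface.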
Impact
AI4ALL provided an enterprise-ready foundation for safer LLM experimentation, internal enablement, and future AI platform extension.
Tech stack
Key learnings
- Enterprise GenAI adoption depends on trust, access control, and evaluation, not only model choice.
- A usable chat interface accelerates feedback from non-technical teams.
- RAG systems need operational ownership for source quality, permissions, and response review.
Expandable details
Governance Pattern
Identity, permission boundaries, prompt practices, and source controls were treated as first-class platform requirements.
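One way to picture a permission boundary on retrieval: a user only sees knowledge sources their directory groups allow. The dataclass, group names, and membership check below are illustrative assumptions, not the platform's actual access model.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    allowed_groups: set[str]  # e.g. Entra ID group names permitted to read this source

def visible_sources(user_groups: set[str], sources: list[Source]) -> list[Source]:
    """Keep only sources sharing at least one group with the user."""
    return [s for s in sources if s.allowed_groups & user_groups]

# Hypothetical source catalog for illustration.
catalog = [
    Source("engineering-wiki", {"eng"}),
    Source("finance-reports", {"finance"}),
]
```

Enforcing this filter before retrieval (rather than after generation) keeps restricted content out of the model's context entirely.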
Platform Direction
The work established reusable patterns for LLM access, internal knowledge retrieval, and AI workflow enablement across teams.