Memorence AI develops AI systems that operate reliably in dynamic environments—where data changes, workflows evolve, and human expertise remains essential.
Our technologies focus on visual intelligence, instant learning, and scalable deployment across industrial and enterprise use cases.
At Memorence AI, our research focuses on building AI systems that function reliably in real operational environments.
Rather than optimizing models only for benchmark performance or offline training results, we design AI architectures that allow systems to learn, adapt, and improve through real-world use.
Our technology is built around the belief that intelligence emerges from experience — not from models alone.
Most AI systems stop learning once they are deployed.
In contrast, Memorence AI is designed to learn during operation.
Our systems support instant learning, allowing expert feedback and operational corrections to immediately influence system behavior during use.
At the same time, they enable continuous learning over time, ensuring that accumulated experience leads to sustained improvement rather than isolated fixes.
By incorporating expert feedback, operational corrections, and real usage signals directly into the learning loop, AI behavior can evolve without requiring full retraining cycles or specialized AI engineering teams.
Through this approach, daily interaction becomes accumulated operational experience, allowing AI systems to respond differently — and more accurately — over time.
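As a loose illustration of this idea (not Memorence's actual implementation; all names are hypothetical), an "instant learning" loop can be sketched as an exemplar memory: the system predicts from its stored examples, and an expert correction becomes a new exemplar immediately, changing behavior without any retraining cycle.

```python
# Hypothetical sketch of an instant-learning feedback loop: an exemplar
# memory that predicts by nearest neighbour and absorbs expert
# corrections at once. Names and features are illustrative only.
import math

class ExemplarMemory:
    def __init__(self):
        self.exemplars = []  # list of (feature_vector, label) pairs

    def predict(self, features):
        """Return the label of the closest stored exemplar, or None."""
        if not self.exemplars:
            return None
        vec, label = min(self.exemplars,
                         key=lambda e: math.dist(e[0], features))
        return label

    def correct(self, features, true_label):
        """An expert correction becomes a new exemplar immediately."""
        self.exemplars.append((features, true_label))

memory = ExemplarMemory()
memory.correct([0.0, 0.0], "pass")    # initial expert-labelled examples
memory.correct([1.0, 1.0], "defect")
print(memory.predict([0.9, 0.8]))     # -> "defect"
memory.correct([0.9, 0.8], "pass")    # expert overrides this region
print(memory.predict([0.9, 0.8]))     # -> "pass", with no retraining
```

The same memory also supports the continuous side of the loop: because every correction is retained, accumulated feedback keeps shaping predictions over time rather than producing isolated fixes.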
Real-world operations are inherently visual.
Our AI systems are built around professional visual data, combining computer vision with multimodal reasoning to support inspection, assembly guidance, troubleshooting, and decision-making.
By integrating visual signals with domain knowledge and contextual information, our systems go beyond simple recognition.
They understand what is happening, why it matters, and how to respond within specific operational contexts.
This vision-centered foundation enables seamless integration between visual inspection, operational assistance, and enterprise knowledge systems.
Effective AI starts with reliable data.
We treat data acquisition as a first-class design principle, not an afterthought.
For each vertical and application, we design optimized data acquisition pipelines — including sensors, optics, capture setups, and workflows — tailored to specific visual and operational requirements.
This ensures that AI systems learn from high-quality, meaningful data collected in real environments, rather than relying on generic datasets that fail to reflect operational reality.
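One way to make acquisition a first-class design artifact, sketched here purely as an assumption (field names and values are illustrative, not Memorence's schema), is to express each vertical's capture pipeline as explicit, validated configuration rather than an ad-hoc setup:

```python
# Hypothetical sketch: a per-application capture pipeline described as
# explicit configuration, so sensor, optics, and workflow choices are
# versioned design decisions. All field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class CapturePipeline:
    vertical: str                 # e.g. "surface inspection"
    sensor: str                   # camera / sensor model
    resolution: tuple             # (width, height) in pixels
    optics: str                   # lens and lighting setup
    exposure_ms: float
    workflow_steps: list = field(default_factory=list)

    def validate(self):
        """Reject obviously unusable settings before deployment."""
        assert self.exposure_ms > 0, "exposure must be positive"
        assert all(self.resolution), "resolution must be non-zero"
        return True

pipeline = CapturePipeline(
    vertical="surface inspection",
    sensor="area-scan camera",
    resolution=(4096, 3000),
    optics="telecentric lens, diffuse dome light",
    exposure_ms=4.0,
    workflow_steps=["trigger on part presence", "capture", "QC review"],
)
print(pipeline.validate())  # -> True
```

Treating the pipeline as data in this way makes it auditable and reusable across deployments, in contrast to generic datasets whose capture conditions are unknown.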
AI systems must operate continuously, not just perform well in demos.
Our architecture is designed for long-term deployment across edge, on-premise, and hybrid environments, supporting stability, scalability, and maintainability over extended operational lifecycles.
This enables enterprises to deploy AI systems that remain controllable, auditable, and adaptable — even as processes, conditions, and requirements evolve over time.
Our research is driven by real operational challenges and focuses on advancing visual intelligence, learning mechanisms, and deployable AI systems for industrial and enterprise environments.
Every technical decision — from learning architecture to deployment design — is guided by practical constraints such as reliability, traceability, data governance, and ease of use for domain experts.
Research outcomes directly inform and strengthen our commercial products and platforms, enabling AI systems to move beyond experimental prototypes and become trusted components of daily operations.
Our core research topics include:
Cognition & Learning Systems
Learning models inspired by neocortical memory mechanisms, for long-term knowledge retention and adaptive behavior
Continual and instant learning mechanisms, including zero-shot, one-shot, and few-shot learning
Visual & Vision-Language Intelligence
Computer vision for visual understanding, inspection, and perception
Vision-Language Models (VLMs) for integrating visual content with contextual, procedural, and domain-specific knowledge
Natural language processing (NLP) for contextual and procedural knowledge representation
Action & Robotic Systems
Vision-Language-Action (VLA) systems for linking perception and reasoning to actionable operational guidance and task execution
Robotic perception and interaction systems, enabling AI-assisted operation, verification, and collaboration in physical environments
Edge & Embedded AI Systems
On-device model optimization for efficient edge and embedded AI deployment
Visual generation modules for data augmentation, visualization, and AI-assisted content generation
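As a toy illustration of the few-shot learning mentioned in the topics above (an assumed textbook technique, not Memorence's method), a nearest-class-mean classifier summarizes each class by the centroid of a handful of "support" embeddings and assigns a query to the closest centroid. Here toy 2-D vectors stand in for embeddings that would, in practice, come from a pretrained vision model:

```python
# Hypothetical few-shot classification by nearest class mean: each class
# is summarised by the centroid of its support embeddings, and a query
# embedding is assigned to the nearest centroid. Toy 2-D vectors only.
import math
from collections import defaultdict

def class_centroids(support):
    """support: list of (embedding, label) pairs -> {label: centroid}."""
    grouped = defaultdict(list)
    for vec, label in support:
        grouped[label].append(vec)
    return {
        label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
        for label, vecs in grouped.items()
    }

def classify(query, centroids):
    """Assign the query embedding to the nearest class centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], query))

support = [
    ([0.1, 0.2], "scratch"), ([0.0, 0.3], "scratch"),  # two shots per class
    ([0.9, 0.8], "dent"),    ([1.0, 0.7], "dent"),
]
centroids = class_centroids(support)
print(classify([0.05, 0.25], centroids))  # -> "scratch"
```

Because adding a class only requires computing one more centroid, the same scheme extends naturally to the one-shot case (a single support example per class).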
How Sampling Rate Affects Cross-Domain Transfer Learning for Video Description
Yu-Sheng Chou, Pai-Heng Hsiao, Shou-De Lin, Hong-Yuan Mark Liao
ICASSP 2018: 2651–2655
*Joint research with Academia Sinica, Taiwan