The Interface between Humans and AI

Self-Organizing Maps (SOM) and Bayesian Belief Networks (BBN), which Mindware Research Institute has worked with for many years, will become increasingly important as explainable models in the age of AI. They are being reborn as interfaces that let humans monitor AI behavior, understand why an AI made a particular decision, and give it appropriate instructions in a high-dimensional conceptual space.
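To illustrate how a BBN makes a decision inspectable, here is a minimal sketch of exact inference by enumeration on a toy three-node network (the classic rain/sprinkler/wet-grass example). The network structure and all probabilities are illustrative assumptions, not part of any Mindware product: the point is only that a posterior such as P(Rain | WetGrass) shows *why* the model leans toward a conclusion.

```python
from itertools import product

# Toy network: Rain -> WetGrass <- Sprinkler (all numbers hypothetical)
P_rain = {True: 0.2, False: 0.8}          # prior P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}     # prior P(Sprinkler)
P_wet = {                                  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def posterior_rain(wet=True):
    """Exact inference by enumeration: P(Rain | WetGrass=wet).

    Sums the joint probability over the hidden Sprinkler variable,
    then normalizes over Rain."""
    joint = {True: 0.0, False: 0.0}
    for r, s in product([True, False], repeat=2):
        p_w = P_wet[(r, s)] if wet else 1.0 - P_wet[(r, s)]
        joint[r] += P_rain[r] * P_sprinkler[s] * p_w
    z = sum(joint.values())
    return {r: p / z for r, p in joint.items()}

post = posterior_rain(wet=True)
# Observing wet grass raises the belief in rain above its 0.2 prior,
# and the CPTs show exactly which factors drove that shift.
```

Enumeration scales poorly to large networks, but the same transparency (inspectable conditional probability tables, explicit evidence propagation) is what makes BBNs attractive as an explainability layer.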

Build "concepts" from vast amounts of information using Data Science / AI

We provide a completely new business information analysis method for the AI era: Concept Research (creative information processing) combining Large Language Models (LLM), Self-Organizing Maps (SOM), and Bayesian Belief Networks (BBN). Applications include analysis of customer voices at call centers, analysis of patent information and technology trends, business strategy planning, case law analysis, accident analysis, trouble information analysis, policy planning, and various forms of document management.
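The SOM stage of such a pipeline can be sketched in a few lines: feature vectors (for example, text embeddings produced upstream by an LLM) are mapped onto a 2D grid so that similar items land in nearby cells. This is a generic, self-contained toy implementation in NumPy, not Mindware's actual code; grid size, decay schedules, and epoch count are arbitrary assumptions.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a toy Self-Organizing Map.

    Each grid cell holds a weight vector; at every step the cell closest
    to a random sample (the best-matching unit, BMU) and its grid
    neighbors are pulled toward that sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        # Gaussian neighborhood on the grid around the BMU
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        g = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * g * (x - weights)
    return weights

def project(weights, x):
    """Map a vector to its BMU coordinates on the trained grid."""
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                            weights.shape[:2])
```

After training, `project` turns each document vector into a grid coordinate, which is what makes the map browsable: a human can inspect which customer voices, patents, or cases cluster together.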
AI Governance Solution

The need for AI governance is growing as companies adopt AI. Establishing organizational rules for managing risk and maximizing benefits is important, but the core issue is technical: for humans to keep AI under control, we need technology that lets us understand AI's judgments and give it precise instructions.

ThinkNavi

AI Chief of Staff. A system that helps users define strategies and concepts through chat with AI. Regardless of the underlying LLM, ConceptMiner's concept-structure modeling enables it to form long-term associative memories and to support strategic thinking aligned with the user's interests and context.

ConceptMiner

Concept Modeling Engine. A library set providing tools such as Growing Neural Gas (GNG) combined with Minimum Spanning Trees (MST), and Self-Organizing Maps (SOM). Users can call these functions from their own systems via APIs and develop applications for text/data mining, AI explainability, and associative memory.
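The MST half of a GNG + MST pipeline is commonly used to split a learned topology into clusters. The sketch below is not ConceptMiner's API; it is a generic illustration using SciPy: build a minimum spanning tree over the complete distance graph of some points, then delete the longest edges so the tree falls apart into the desired number of clusters.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial import distance_matrix

def mst_clusters(points, n_clusters):
    """Cluster points by cutting the (n_clusters - 1) longest edges
    of the minimum spanning tree over their complete distance graph."""
    dist = distance_matrix(points, points)
    mst = minimum_spanning_tree(csr_matrix(dist)).toarray()
    if n_clusters > 1:
        edges = np.sort(mst[mst > 0])
        threshold = edges[-(n_clusters - 1)]
        mst[mst >= threshold] = 0.0  # remove the longest edges
    _, labels = connected_components(csr_matrix(mst), directed=False)
    return labels

# Two well-separated toy groups of 2D points
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = mst_clusters(pts, 2)
```

In a GNG + MST setting the same cut would be applied to the graph of GNG prototype nodes rather than raw data points, which keeps the tree small even for large datasets.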