Technological innovations of the Fourth Industrial Revolution, AI among them, are now emerging one after another. However, the markets these innovations create are still taking shape, and the companies that will survive in them have not yet been determined. Even the leading companies in today’s hype cycle could suddenly vanish from the market. No one can foresee the future, and no objective data exists to predict it.

Thirty years ago, when the internet became commercially available, we faced a similar situation. While Japanese companies, intent on gathering objective facts before making decisions, remained paralyzed, American companies such as Google and Amazon aggressively entered uncharted territory and established overwhelming dominance. Are we about to repeat the same mistake? What does an organization need in order to make decisions in such an environment?

The “Concept Research Method” is a technique for exploring business opportunities related to emerging technologies.

In 1994, Kunihiro Tada argued to fellow consultants and client companies that “the Internet will become a major industry,” but no one listened. He agonized over how to share with others what he saw so clearly. As long as judgments were based on “facts,” there was no way to persuade anyone: the Internet was still in its infancy at that point and offered no objective evidence.

He therefore coined the term “concept research” to explain how information should be analyzed. Inquiry is not just about investigating “facts”; it is equally crucial to recognize the “concepts” we overlook. Humans seem to perceive “facts,” but in reality they do not. Instead, they “project” onto the external world the “assumptions” ingrained at the subconscious level, and that is what they see. This is a profound epistemology shared by figures from Shakyamuni to Nagarjuna, Vasubandhu and Asanga, Kant, Husserl, and Jung. Yet even when this is explained, it rarely sounds like something most people would find useful in the real world. (Though recognizing it is actually the shortcut.)

So he turned his attention to the world of pattern recognition technology. Among the pioneering research of the 1960s that led to today’s “machine learning” was the Ugly Duckling Theorem. It states that “from a purely logical standpoint, all pairs of objects possess the same degree of similarity; to escape this theorem, one must acknowledge that certain attributes are more important than others.” In other words, it expresses mathematically what numerous sages, including Shakyamuni Buddha, have stated: “Nothing in this world possesses meaning in and of itself.”
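
To make the theorem concrete, here is a small, self-contained sketch (an illustration of the idea, not the original 1960s formulation): each possible predicate is identified with the subset of objects that satisfies it, and for every pair of objects we count how many predicates both members satisfy.

```python
from itertools import chain, combinations

# Four distinct objects; their internal features are irrelevant to the theorem.
objects = ["swan_1", "swan_2", "swan_3", "duckling"]

# Identify each possible predicate with the subset of objects satisfying it.
predicates = [set(s) for s in chain.from_iterable(
    combinations(objects, r) for r in range(len(objects) + 1))]

# Similarity of a pair = number of predicates that both objects satisfy.
for a, b in combinations(objects, 2):
    shared = sum(1 for p in predicates if a in p and b in p)
    print(f"{a} ~ {b}: {shared} shared predicates")

# Every pair shares exactly 2**(len(objects) - 2) = 4 predicates, so the
# "ugly duckling" is as similar to a swan as two swans are to each other,
# unless we decide that some attributes matter more than others.
```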

In 1998, he began researching Kohonen’s Self-Organizing Maps (SOM) as a new starting point. Used with an understanding of the Ugly Duckling Theorem, SOM becomes a tool for conceptual exploration: by altering the weights assigned to each dimension of the data, it presents various classifications. Whereas the cross-charts frequently used by consulting firms in strategy projects tend to oversimplify reality, SOM generates strategic maps that capture the complexity of the real world more faithfully.
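
As a rough illustration of how re-weighting dimensions changes the classification a SOM produces, here is a minimal sketch using the open-source MiniSom library (an assumed stand-in, not the Viscovery tooling mentioned later); the data and weight vectors are invented for illustration.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(0)
# Toy data: 200 "companies" described by 3 normalized dimensions,
# e.g. price, quality, delivery speed (hypothetical attributes).
data = rng.random((200, 3))

def train_som(data, dim_weights, size=8):
    """Train a SOM after re-weighting each input dimension."""
    weighted = data * np.asarray(dim_weights)   # emphasize / attenuate dimensions
    som = MiniSom(size, size, weighted.shape[1],
                  sigma=1.5, learning_rate=0.5, random_seed=0)
    som.random_weights_init(weighted)
    som.train_random(weighted, 5000)
    # Each sample is "classified" by the map node it falls on.
    return [som.winner(x) for x in weighted]

# The same data yields different classifications under different weightings.
labels_price_driven   = train_som(data, [3.0, 1.0, 1.0])   # price dominates
labels_quality_driven = train_som(data, [1.0, 3.0, 1.0])   # quality dominates
print(sum(a != b for a, b in zip(labels_price_driven, labels_quality_driven)),
      "of 200 samples land on different map nodes")
```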

Subsequently, starting in 2003, he also began working on Bayesian Belief Networks (BBN). A BBN is a probabilistic, causal network model: a technology for representing phenomena with high uncertainty. It can learn network structures and parameters from data, or it can be built from domain expertise even when data is unavailable. The latter is particularly significant for decision support under bounded rationality. Furthermore, a constructed model can be used to generate synthetic data; analyzing that data with SOM then makes it possible to simulate future “market structures” under various scenarios.
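
As a hedged sketch of the expert-driven side of this workflow, the following uses the open-source pgmpy library (an assumed tool choice; the text does not name a specific BBN implementation, and class names vary slightly across pgmpy versions). The structure and probabilities are invented placeholders for domain expertise, not an actual model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.sampling import BayesianModelSampling

# A toy expert-specified causal structure (hypothetical, for illustration only):
# regulation -> adoption -> market_growth
model = BayesianNetwork([("regulation", "adoption"), ("adoption", "market_growth")])

# CPDs encode domain expertise instead of being learned from (unavailable) data.
cpd_reg = TabularCPD("regulation", 2, [[0.7], [0.3]])        # loose / strict
cpd_adopt = TabularCPD("adoption", 2,
                       [[0.2, 0.6],    # low adoption given loose / strict regulation
                        [0.8, 0.4]],   # high adoption
                       evidence=["regulation"], evidence_card=[2])
cpd_growth = TabularCPD("market_growth", 2,
                        [[0.7, 0.1],   # slow growth given low / high adoption
                         [0.3, 0.9]],  # fast growth
                        evidence=["adoption"], evidence_card=[2])
model.add_cpds(cpd_reg, cpd_adopt, cpd_growth)
assert model.check_model()

# Generate synthetic data from the expert model; such samples could then be
# explored with SOM to map possible future market structures.
synthetic = BayesianModelSampling(model).forward_sample(size=1000)
print(synthetic.head())
```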

SOM and BBN are fundamentally quantitative techniques, and traditionally there was an unbridgeable gap between qualitative and quantitative analysis. In recent years, however, the situation has changed dramatically. The emergence of large language models (LLMs) surprised many people, yet most still fail to grasp their true nature: an LLM represents language as vectors. This means SOM can be used to link traditional quantitative data with textual information. (In fact, it can integrate multimodal information such as images and audio.)
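
One way such a linkage could look, sketched with the sentence-transformers and MiniSom libraries as assumed stand-ins (the encoder name, data, and scaling are illustrative choices, not the pipeline described in the text):

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from minisom import MiniSom

# Hypothetical records mixing quantitative fields and free text.
texts = ["Customer praises fast delivery but complains about packaging.",
         "Long-time user considering cancellation due to rising prices.",
         "New customer impressed by support quality."]
numeric = np.array([[0.82, 3], [0.31, 7], [0.95, 1]])  # e.g. satisfaction score, tenure (years)

# An LLM-family encoder turns each text into a vector.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
text_vecs = encoder.encode(texts)

# Normalize both blocks so neither dominates, then concatenate.
def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)

combined = np.hstack([zscore(numeric), zscore(text_vecs)])

# One SOM now organizes quantitative and textual information together.
som = MiniSom(5, 5, combined.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(combined, 2000)
print([som.winner(v) for v in combined])
```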

Modeling text information with SOM enables the digitization of the KJ method. The KJ method is a qualitative information analysis technique devised in Japan during the 1960s. It is named after its creator, cultural anthropologist Jiro Kawakita. A similar approach is known internationally as the Grounded Theory Approach (GTA). These methods share the following common steps:

  1. Collect many pieces of information through field research.
  2. Classify pieces of information by similarity. (The KJ method recommends repeating this classification over and over again “until the scales fall from your eyes.”)
  3. Extract common characteristics within groups created by classification.
  4. Explain relationships between groups or characteristics. (The KJ method creates a diagram.)

Steps 2 and 3 can be achieved using SOM and statistical testing, while step 4 can be realized using BBN. In 2024, Mindware Research Institute began experiments using ChatGPT and Viscovery SOMine to create cognitive maps from text information. These experiments succeeded in efficiently creating and clustering cognitive maps of text information with a proprietary algorithm that combines the Minimum Spanning Tree (MST) with Growing Neural Gas (GNG), a technique in the same family as SOM. Moving forward, we will implement the latter part of the Concept Research Method using linear Gaussian BBNs.
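
The MST-plus-GNG algorithm itself is proprietary and not described here. Purely to illustrate the general idea of separating clusters by cutting the longest edges of a minimum spanning tree over prototype vectors (such as GNG or SOM nodes trained on text embeddings), a sketch with SciPy might look like this:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_clusters(points, n_clusters):
    """Cluster points by building an MST and cutting its longest edges."""
    dist = squareform(pdist(points))              # pairwise distance matrix
    mst = minimum_spanning_tree(dist).toarray()   # MST as a weighted adjacency matrix
    # Removing the (n_clusters - 1) longest MST edges leaves a forest
    # whose connected components are the clusters.
    edges = np.argwhere(mst > 0)
    order = np.argsort(mst[edges[:, 0], edges[:, 1]])[::-1]
    for i, j in edges[order[: n_clusters - 1]]:
        mst[i, j] = 0
    _, labels = connected_components(mst, directed=False)
    return labels

# Toy prototype vectors standing in for nodes of a trained GNG/SOM.
rng = np.random.default_rng(1)
protos = np.vstack([rng.normal(0, 0.3, (20, 2)),
                    rng.normal(3, 0.3, (20, 2))])
print(mst_clusters(protos, n_clusters=2))
```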