Where Are All These Areas of AI, NLP, ML, DL, and ANN Going? Part 1

When you learn a new subject, the first thing you need to do is pick up a set of terms and acronyms for it.

To save space, I won’t define each term here. You can easily look each one up online.

A word of warning: this blog is meant not for NLP or AI experts but for the interested layman.

On a recent trip to Japan, I had a chance to talk with an expert in NLP and artificial intelligence at Nifty Corporation, a Fujitsu subsidiary whose business is providing online services in Japan. The expert was Dr. Kunio Matsui, an executive manager at Nifty. Before that, he was with Fujitsu’s research labs, specializing in NLP.

[Photo: Dr. Kunio Matsui]

Initially, my motive was to find out how hard it is to Japanize a computer system. More specifically, I was curious why it was taking IBM so long to Japanize Watson. As we talked, though, our discussion drifted to cover many aspects of artificial intelligence, which was very useful for my understanding of the subject. AI is now resurgent and the market is confused, although AI is far from being a new field. It is hard to tell where AI stands now, where it is going, what we can do with the current AI, and what cannot be done. Also, in my view, the terms ML and AI are often used interchangeably, although ML is just one of AI’s many subfields. It is very important to keep your cool when analyzing AI: understand what it is, where it is now, and where it is going.

NLP

Architecture

I wanted to know how NLP interfaces with the AI engine. I prepared two different versions of the interface, as shown in Figure 1, and showed them to Matsui. Real systems are not this simple, but remember that I am approaching everything from a layman’s point of view. If the architecture is like the one on the left side of Figure 1, with a clearly defined interface (in yellow in the figure), all we have to do is rip out the English NLP and replace it with a Japanese one. If that were the case, it would seem trivial to support multiple languages, including double-byte languages like Japanese.

In reality, things are not that easy. According to Matsui (he worked from my example and did not actually use this figure), real systems are architected like the one on the right side of Figure 1. Instead of the well-defined interface shown on the left, the NLP is usually tightly integrated with the engine, so when we want to support a different language, it cannot simply be swapped out. If we rip out the NLP for one language and drop in one for another, it will not fit the integration points built for the original language. The new NLP requires a more complex integration, which takes expertise in the new language and effort to adjust all the touch points. This explains why it is taking IBM time to Japanize Watson.

Figure 1: How NLP is integrated with the AI engine. The left one has a clean interface between NLP and the engine. The right one is an architecture that tightly integrates NLP with the engine.
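
To make the left-hand architecture concrete, here is a minimal Python sketch of the clean-interface idea. This is my illustration, not IBM’s design; every class and method name in it is invented. The engine depends only on an abstract NLP front end, so supporting a new language means writing one new front end rather than rewiring the engine.

```python
from abc import ABC, abstractmethod

class NLPFrontEnd(ABC):
    """Language-specific analysis hidden behind a language-neutral interface."""

    @abstractmethod
    def analyze(self, text: str) -> list[str]:
        """Return language-neutral terms for the engine to reason over."""

class EnglishNLP(NLPFrontEnd):
    def analyze(self, text: str) -> list[str]:
        # English can be split on whitespace (a crude simplification).
        return text.lower().split()

class JapaneseNLP(NLPFrontEnd):
    def analyze(self, text: str) -> list[str]:
        # Japanese has no spaces between words, so a real front end needs a
        # morphological analyzer (e.g., MeCab); characters stand in here.
        return [ch for ch in text if not ch.isspace()]

class AIEngine:
    """The engine sees only the interface, never language-specific details."""

    def __init__(self, nlp: NLPFrontEnd) -> None:
        self.nlp = nlp

    def answer(self, question: str) -> str:
        terms = self.nlp.analyze(question)
        return f"(reasoning over {len(terms)} terms)"

# Swapping languages is a one-line change (the "trivial" case above):
print(AIEngine(EnglishNLP()).answer("What is Watson?"))
print(AIEngine(JapaneseNLP()).answer("ワトソンとは何ですか"))
```

The right-hand architecture is what you get when the engine calls language-specific details directly in many places: there is no single seam to cut, which is why the swap is so hard in practice.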

Japanization

I was wondering when a system can accurately be described as Japanized. Matsui, not knowing how IBM does this, speculated as follows. First of all, it is important to have knowledge, or a corpus, in Japanese. All of the original Watson’s knowledge and dictionaries are in English and derived from many places, including Wikipedia. Matsui thought IBM was using the Japanese Wikipedia to Japanize Watson. Second, Q&As in Japanese have been categorized. This means that different types of questions are classified into certain categories, and their answers can be determined according to the categories retrieved. With these two, Watson can perform Q&A in Japanese. So in essence, the Japanese version of Watson is not that much different from DeepQA, a major component of the English version of Watson, which won Jeopardy! in 2011.
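
As a heavily simplified illustration of that second ingredient, the sketch below classifies a question into a category and then retrieves an answer according to the category. The cue words and toy knowledge base are invented for illustration; a Japanized system would need Japanese cue words and a Japanese knowledge source, such as the Japanese Wikipedia Matsui mentioned.

```python
# Invented cue words that map a question to a category.
CATEGORY_RULES = {
    "person": ("who",),
    "place": ("where",),
    "time": ("when",),
}

# A toy knowledge base, keyed by category. Real systems draw on a large
# corpus such as Wikipedia (or the Japanese Wikipedia for Japanese).
KNOWLEDGE = {
    "person": {"is watson named after": "Thomas J. Watson"},
    "place": {"ibm headquartered": "Armonk, New York"},
}

def classify(question: str) -> str:
    q = question.lower()
    for category, cues in CATEGORY_RULES.items():
        if any(q.startswith(cue) for cue in cues):
            return category
    return "other"

def answer(question: str) -> str:
    category = classify(question)
    # The category narrows the search to the matching slice of knowledge.
    for key, value in KNOWLEDGE.get(category, {}).items():
        if key in question.lower():
            return value
    return "(no answer found)"

print(answer("Who is Watson named after?"))   # Thomas J. Watson
print(answer("Where is IBM headquartered?"))  # Armonk, New York
```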

Present and Future of NLP

Although our conversation concerned the Japanese language, it applies to other languages, like English, as well. Current NLP can process natural languages well in terms of syntax and grammar. When it comes to semantics, it struggles except in some easy cases. This is mainly because natural languages like Japanese and English are ambiguous: the same word may mean different things in different contexts. The English word spring can mean one of the four seasons or a coiled piece of metal. The only way to recover the true meaning of a sentence using spring is to understand the sentences before and after it. This is called semantic analysis, and Matsui concluded that it remains a hard problem for most natural languages.
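
To see why the surrounding sentences matter, here is a toy Python sketch of context-based disambiguation, a much-simplified version of the classic overlap (Lesk-style) idea. It is my example, not Matsui’s, and the clue words are invented: each sense of spring gets a set of clue words, and the sense whose clues overlap most with the context wins.

```python
import re

# Invented clue words for two senses of "spring".
SENSES = {
    "season": {"flowers", "bloom", "winter", "warm", "april"},
    "coil":   {"metal", "compress", "tension", "mattress", "machine"},
}

def disambiguate(context: str) -> str:
    """Pick the sense whose clue words overlap most with the context."""
    words = set(re.findall(r"[a-z]+", context.lower()))
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("The spring in the mattress will compress under tension."))  # coil
print(disambiguate("In spring, flowers bloom as the winter air turns warm."))   # season
```

Real semantic analysis is vastly harder than this, of course, which is exactly the point Matsui was making.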

Matsui said that NLP should be approached with two different methods. One is to develop a set of rules applicable to each language: even though each language is ambiguous, certain things can be well-defined by rules. The rest should be taken care of by “brute force” (the author’s word, not his), and that “brute force” is ML/DL. I think this is similar to running an empirical study when you cannot establish a theory of something’s behavior. This approach is very pragmatic; in other words, it is an engineering approach rather than a purely scientific one. Being a former engineer himself, this author is in total agreement with it. Matsui also said that this is the current trend in the NLP research community.
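
In code terms, the hybrid looks something like the sketch below: apply rules where behavior is well-defined, and hand everything else to a statistical model. This is a minimal illustration of the idea only; the date rule is invented, and a stub stands in for a real trained ML/DL model.

```python
import re

def rule_based(sentence: str):
    """Rules handle the well-defined cases, e.g., an ISO-format date."""
    match = re.search(r"\b\d{4}-\d{2}-\d{2}\b", sentence)
    if match:
        return {"type": "date", "value": match.group(0)}
    return None  # the rules do not apply

def ml_fallback(sentence: str):
    """Stand-in for a trained ML/DL model covering everything else."""
    return {"type": "guess", "value": sentence}

def analyze(sentence: str):
    # Try the rules first; fall back to "brute force" when they fail.
    return rule_based(sentence) or ml_fallback(sentence)

print(analyze("The meeting is on 2015-04-01."))     # handled by a rule
print(analyze("Let's meet sometime next spring."))  # falls through to ML
```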

Continued in Part 2.

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with broad functional expertise, including roles as analyst, writer, CTO, and VP of Engineering, along with general management, sales, and marketing, in diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in the application of IT to energy, such as smart grid, green IT, building/data center energy efficiency, and cloud computing.

