Of all the jobs in the world, the highest level is managing people. Whoever does not learn management ends up working under some manager or other.
Engineers tend to think, a little dismissively, that they don't need to learn management. Even when a management course was offered at my university, I turned up my nose at it. But at the end of the day, management is everything.
When Mark Zuckerberg created Facebook, he did the coding himself; he was a very good coder. But as the days went by, he gradually had to leave coding and engineering behind and move into management.
I have written a lot about programming and doing well at it; many good careers can be built on programming. But what should someone do who doesn't like programming? He should find what he does want to do, work at that, and figure out how far it can take him. Once the goal is set, the rest is skill development and hard work. If you stay properly on course, where you can reach is beyond imagination.
I like the example of Sundar Pichai here. An ordinary person who joined Google as an ordinary employee is now the CEO of Google's parent company. Such examples have been rare in Silicon Valley. Most CEOs were founders: they created a product, the company gradually grew around that product, and they had to move into management to take care of the company they themselves had built. Sundar Pichai, by contrast, joined Google as the manager of a small product. From there, purely through dedication and expertise in his work, he kept getting promoted upward. Google has thousands of programmers, all top-notch, each working under a manager. Programmers must of course succeed in their own roles; I am writing this article so that non-programmers and non-engineers understand how big a thing managing people in a company is.
Another type of neural network was being worked on long before the advent of deep learning. It is called the recurrent neural network, or RNN for short. These networks are specifically designed to analyze data that has a temporal sequence, such as segments of spoken audio or sequences of written words.
A brief description of an RNN goes something like this. Suppose we have data that changes over time. The information contained in each time step of that data is fed into the network as input, and a result is produced as output. In addition, part of the output of the last hidden layer is fed into the first hidden layer of the network at the next time step (see figure below). The purpose of this arrangement is to judge each piece of information in the context of the information that preceded it, rather than in isolation. That is essential for understanding spoken words or written sentences.
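The recurrence described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a trained network: the weights are fixed placeholder values, the hidden size is arbitrary, and the function names (`rnn_step`, `run_sequence`) are invented for this sketch.

```python
import math

def rnn_step(x, h_prev, w_xh, w_hh):
    """One time step: combine the current input x with the
    previous hidden state h_prev through a tanh activation."""
    return [
        math.tanh(sum(w * xi for w, xi in zip(w_xh[i], x)) +
                  sum(w * hi for w, hi in zip(w_hh[i], h_prev)))
        for i in range(len(h_prev))
    ]

def run_sequence(xs, hidden_size=2):
    # Fixed toy weights so the example is self-contained.
    w_xh = [[0.5] * len(xs[0]) for _ in range(hidden_size)]
    w_hh = [[0.1] * hidden_size for _ in range(hidden_size)]
    h = [0.0] * hidden_size
    for x in xs:
        # Each step feeds the previous hidden state back in,
        # so h carries context from everything seen so far.
        h = rnn_step(x, h, w_xh, w_hh)
    return h  # an impression of the whole sequence

final = run_sequence([[1.0], [0.0], [1.0]])
```

Because the hidden state is threaded through every step, the same inputs in a different order produce a different final state, which is exactly what lets the network judge a word in the context of the words before it.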
Another side effect of this is that after a meaningful stretch of the sequence, such as a sentence, has been fed in, the last hidden layer holds a collective impression of it. Using that impression, the same thing can be said, i.e. translated, in another language, one with a different syntax. This requires a second RNN whose first input is that hidden-layer impression; it then outputs one word of the second language at each time step until the sentence is complete. These two RNNs, however, must be trained on many examples, with the help of many previously translated documents.
An RNN can also learn, by reading many documents, what the next word is likely to be. When we type on a computer or mobile, the word suggestions that appear before us come with the help of such an RNN. In this case the RNN continually adapts itself to our own writing style, so as to predict the next word more accurately.
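The adaptation idea can be shown with a much simpler stand-in for the RNN: a bigram frequency table that updates from the user's own writing. The class name `Suggester` and the method names are invented for this sketch; the point it shares with the RNN above is that suggestions shift toward the user's style as more text is observed.

```python
from collections import defaultdict

class Suggester:
    def __init__(self):
        # counts[prev][nxt] = how often nxt followed prev in the
        # user's own writing.
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1  # adapt to the user's style

    def suggest(self, prev_word):
        options = self.counts.get(prev_word)
        if not options:
            return None
        # Suggest the most frequent follower seen so far.
        return max(options, key=options.get)

s = Suggester()
s.observe("see you tomorrow")
s.observe("see you soon")
s.observe("see you soon")
```

After these three observations, typing "you" would surface "soon", because the user has written it more often than "tomorrow".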
So far this is all language. By analyzing an image with a CNN and feeding its latent-layer information into an RNN, a linguistic description of the image can be obtained. This, however, requires the RNN to be taught by example how an image can be described in language from the CNN's latent-layer data. All in all, the results are quite surprising.
The impact of deep learning
The reason neural networks have been discussed in such detail is that deep learning has been the mainstay of recent AI breakthroughs. The central point of the technique is that almost any real relationship between input and output can be learned if the network is deep and detailed enough. The use of high-speed graphics processing units (GPUs) makes this learning phase bearable in terms of time. It can be said that deep learning has brought a touch of AI into our daily lives, especially through various mobile apps.
But deep learning does not solve every problem; traditional AI often relies on logic, reasoning, and search. A lot of work is now being done on conversational AI, that is, the technology of conversing with computers. Neural networks handle spoken or written language in a very mechanical way: they can convert speech to text, text to speech, even translate from one language to another. But the underlying meaning of those words is not easily grasped by the computer. That requires reasoning and inquiry into sentence structure, word meaning, and so on. So it is quite difficult for computers to hold a sustained conversation with humans. Google Home, Amazon's Echo, and the chatbots of various websites often get confused when they cannot catch the context of what is said.
The manifestation of human intelligence lies not only in seeing and hearing but also in the movements of hands and feet: walking, running, sports, and so on. Underlying all this is a kind of control ability, partly innate and partly learned. It too is a type of intelligence, one that is not rational and operates largely at a subconscious level.
Imparting such capabilities to machines is not easy at all. It is a huge challenge to make a robot move stably and reliably by estimating the mass of each of its limbs, calculating how much current should flow through each of its motors, and following the laws of kinematics and control. Instead of relying entirely on formulas, robots can acquire these skills through continuous learning. Neural networks and deep learning are very relevant here, but another concept also comes into play: reinforcement learning.
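The flavor of reinforcement learning can be shown with a minimal sketch: tabular Q-learning on a one-dimensional corridor where an agent must learn to walk right to reach a goal. The corridor, reward, and learning parameters are all illustrative choices for this sketch, not anything from robotics practice; a robot would face continuous states and actions, but the trial-reward-update loop is the same idea.

```python
import random

N_STATES = 5        # positions 0..4; the goal is the rightmost cell
ACTIONS = [-1, +1]  # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Mostly act greedily, but explore occasionally.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at goal
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the best next action.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy: the chosen action at each non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

No formula for "walk right" was ever written down; the agent discovers it purely from trial, error, and reward, which is exactly what makes this family of methods attractive for motor control.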