Alexander Fleiss, CIO & CEO, Rebellion Research, and Jeremy Newton, Chief Science Officer & Partner, Rebellion Research
There are two main approaches to creating Artificial Intelligence, or any intelligent program.
The first approach to creating Artificial Intelligence was rule-based. One codes into the computer a set of rules or statements that determine what the computer does on various inputs. For example, one might argue that proofreading a paper is an activity that requires a modicum of intelligence, making grammar-checking software a simple form of AI. Most grammar-checking software involves programming into the computer the rules of English grammar: what types of words can follow intransitive and transitive verbs, the placement of the adjective in relation to the noun, and so on. From this set of rules, the program can take in a sentence, a paragraph, or a paper, and accurately detect where there are mistakes. Certain semantic parsers, programs that try to "understand" what a sentence means, use such a rule-based approach when breaking down an English sentence.
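As a toy sketch of this approach (purely illustrative, not any production grammar checker), a rule-based checker can be as simple as a list of hand-written patterns, each paired with a message:

```python
import re

# A toy rule-based grammar checker: each rule is a regular expression
# paired with a violation message. Real systems encode hundreds of such
# rules; these are far from complete English grammar.
RULES = [
    (re.compile(r"\ba ([aeiouAEIOU]\w*)"), "use 'an' before a vowel sound: 'a {0}'"),
    (re.compile(r"\ban ([^aeiouAEIOU\s]\w*)"), "use 'a' before a consonant sound: 'an {0}'"),
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), "repeated word: '{0} {0}'"),
]

def check(sentence):
    """Apply every rule to the sentence and collect violation messages."""
    problems = []
    for pattern, message in RULES:
        for match in pattern.finditer(sentence):
            problems.append(message.format(match.group(1)))
    return problems

print(check("He ate a apple and and went home."))
# ["use 'an' before a vowel sound: 'a apple'", "repeated word: 'and and'"]
```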
Another example is found in almost all chess-playing programs. A set of rules is given to the computer to determine how good or bad a given board position is. The computer then searches through the possible moves in a position and applies that set of rules to each resulting board position to determine the optimum move to make.
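A minimal sketch of that recipe (using tic-tac-toe in place of chess, purely to keep the example short) pairs a hand-written evaluation rule with an exhaustive minimax search:

```python
# Rule-based game playing in miniature: a hand-coded evaluation plus a
# search over possible moves. Tic-tac-toe stands in for chess here only
# to keep the example small enough to search exhaustively.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def evaluate(board):
    """Hand-written 'rules': +1 if X has won, -1 if O has won, 0 otherwise."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return 1 if board[a] == "X" else -1
    return 0

def minimax(board, x_to_move):
    """Search every continuation, applying the evaluation at the leaves."""
    score = evaluate(board)
    if score != 0 or " " not in board:
        return score, None
    best_score, best_move = (-2, None) if x_to_move else (2, None)
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        board[i] = "X" if x_to_move else "O"
        child_score, _ = minimax(board, not x_to_move)
        board[i] = " "
        if (x_to_move and child_score > best_score) or \
           (not x_to_move and child_score < best_score):
            best_score, best_move = child_score, i
    return best_score, best_move

# X to move with two in a row: the search finds the winning square (index 2).
board = list("XX OO    ")
print(minimax(board, x_to_move=True))  # (1, 2)
```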
While extremely effective in most cases, this approach has many drawbacks. First of all, it requires rules to be set, which means the programmer needs expert knowledge in the subject in order to create the rule set. In addition, one must be able to quantify this expert knowledge. In chess programs, for example, when grandmasters look at the board and decide whether their position is good or not, a great deal of intuition is involved. It is not easy for them to break down and properly weigh the individual aspects of the position, which further complicates the creation of the rules.
The rules must also remain valid both now and in the future. Rule-based semantic parsers are not very popular in the field because, for the most part, people do not use proper English grammar, and even when the rules are followed, they do not lead to a unique interpretation.
In addition, this approach to creating artificial intelligence is cumbersome. Rule-based algorithms are difficult to adapt to other purposes: for each new problem and new behavior, a complicated set of rules needs to be devised and implemented. And if the underlying parameters of the problem change, for example if new common usages are adopted into the rules of English grammar, the program needs to be rewritten. Programs written solely with this methodology thus tend to be narrow in scope and constantly need refining.
The second approach scientists have taken toward Artificial Intelligence is machine learning.
Machine learning is a process through which a program is given a corpus of data, such as historical stock information and returns, and a task or set of tasks, such as predicting future stock returns. The learning algorithm is considered successful if, as its corpus of data, called its training data, grows, its ability to complete each task improves.
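That success criterion can be made concrete with a small sketch. The example below is entirely synthetic: hypothetical "factor" data standing in for historical stock information, with a linear model whose held-out error shrinks as the training corpus grows:

```python
import numpy as np

# Synthetic stand-in for historical stock data: 5 factor exposures per
# observation, returns generated from hidden true weights plus noise.
# Entirely illustrative; no real market data or model is implied.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.1, 0.8, -0.6])

def make_data(n):
    X = rng.normal(size=(n, 5))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

X_test, y_test = make_data(1_000)  # held-out data for measuring the task

for n_train in (10, 100, 1_000, 10_000):
    X, y = make_data(n_train)                  # growing training corpus
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # "learn" the weights
    err = np.mean((X_test @ w - y_test) ** 2)  # task performance
    print(f"{n_train:>6} training examples -> test MSE {err:.4f}")
```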
How does Machine Learning work?
Every machine learning algorithm depends on three things, each of which needs to be programmable. First, there must exist an experience set, sometimes called a training set: the data the algorithm will "learn" from. Next, there must be some task, some action we are trying to make the machine perform; a task could be playing a game of chess, predicting the outcome of a game, or predicting a recession. Finally, there must be some performance measure: a way for the algorithm to differentiate between two different ways of completing a task. In general, a machine learning algorithm attempts to find its own rules and methods in order to optimize its performance measure.
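Here is a minimal sketch (again on synthetic, purely illustrative data) that labels those three ingredients in code, with plain gradient descent standing in for the optimization:

```python
import numpy as np

# The three ingredients from the paragraph, made explicit for a tiny learner.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                # 1. experience / training set
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=200)

def predict(w, X):                           # 2. the task: predict y from X
    return X @ w

def performance(w):                          # 3. performance measure (MSE)
    return np.mean((predict(w, X) - y) ** 2)

# The algorithm finds its own "rules" (here, the weights w) that optimize
# the performance measure -- plain gradient descent on the MSE.
w = np.zeros(3)
for step in range(500):
    grad = 2 * X.T @ (predict(w, X) - y) / len(y)
    w -= 0.1 * grad

print("learned weights:", np.round(w, 2))    # close to [ 2.  -1.   0.5]
```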
Machine learning really came into vogue in the 1990s as machines became faster and mathematical optimization techniques were developed and refined. These algorithms have proven very successful, and have often shown themselves to perform better than the straight rules-based approach.
As a famous example, the chess-playing program Deep Blue, which challenged then-reigning champion Garry Kasparov in 1996 and beat him in 1997, used machine learning in order to play the game. Instead of simply being handed a set of rules on how to value each board position, the program was given a large set of board positions that had been evaluated by a group of masters. These masters did not assign a numerical value to each board; they merely indicated whether the position gave an advantage to either side, or whether there was equality on the board. It was then up to the program to decide how to weight the different factors on the board so as to match the masters' evaluations as closely as possible. The result, a computer beating a man widely considered one of the best chess players of all time, was impressive to say the least.
Machine learning algorithms can build on and use human intuition in finding solutions, as opposed to forcing the human programmer to try to break down his own intuitive decisions. Deep Blue is a perfect example of this advantage: instead of forcing chess experts to find and properly weight the factors in a position, the experts were allowed to do what they do best, judging whether a board position is good or not. The actual weighting of the factors was left up to the machine.
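A schematic of that weighting step (not Deep Blue's actual method or data, just the idea, under hypothetical features and labels) is to fit the evaluation weights to the experts' judgments by least squares:

```python
import numpy as np

# Suppose each position is summarized by a few hand-picked features
# (material balance, mobility, king safety, ...), and masters label
# positions +1 (white better), 0 (equal), or -1 (black better).
# All values below are synthetic stand-ins for illustration.
rng = np.random.default_rng(2)

n_positions, n_features = 500, 4
features = rng.normal(size=(n_positions, n_features))  # hypothetical feature values
hidden_w = np.array([1.0, 0.4, 0.25, 0.6])             # stands in for master intuition
labels = np.sign(features @ hidden_w + 0.2 * rng.normal(size=n_positions))

# The machine's job: choose weights so that the weighted feature sum
# matches the masters' judgments as closely as possible (least squares).
w, *_ = np.linalg.lstsq(features, labels, rcond=None)
agreement = np.mean(np.sign(features @ w) == labels)
print("learned weights:", np.round(w, 2), "| agreement:", agreement)
```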
In conclusion, good machine learning algorithms can be used for many purposes, and do not need to be maintained as diligently as rule-based systems. As their body of experience grows, learning algorithms can modify their rules to take the new reality into account.
Because of these advantages, machine learning has started to take over from rules-based systems, although real-world problems are often tackled with a fusion of the two. That is how our firm, Rebellion Research, developed our machine learning global economic and fundamental monitoring technology. We monitor daily data from over 50 countries, allowing our technology to predict events such as the American Housing Crisis and the Greek Debt Crisis, as well as to pick out strong fundamental growth companies that are well placed in front of positive economic momentum.
Our A.I. bases its analysis on many hand-selected, non-traditional company factors, which correlate with more typical factors like growth, value, and macro.