Thursday, 1 August 2013

Researchers Find Secret Behind Biological Modularity; Artificial Intelligence Gets a Boost

Artificial intelligence may be getting a boost. Cornell University engineering and robotics researchers simulated 25,000 generations of evolution within computers and discovered why biological networks tend to be organized as modules. The new finding could lead to a deeper understanding of the evolution of complexity, and could advance artificial intelligence to the point where robot brains function like animal brains.
For some reason, biological entities tend to be organized into modules. For example, brains and gene regulatory systems form dense clusters of interconnected parts within a complex network. For years, scientists have wondered why organisms ranging from humans to bacteria have evolved in this modular fashion. The prevailing assumption was that modular entities could respond to change more quickly, and therefore had an evolutionary advantage. However, this theory may not be enough to explain the phenomenon's origin.

4th Industrial Revolution Showcased By World's Largest Technology Show in Germany

The world’s leading trade fair for industrial technology, HANNOVER MESSE 2013 in Germany, has underscored its role as a key driver of the fourth industrial revolution. “Exhibitors and visitors alike have given this year’s HANNOVER MESSE very high marks, particularly for its keynote theme of ‘Integrated Industry’, which highlights a sweeping trend towards cross-industry networking and integration,” said Dr. Jochen Köckler, an executive of organizer Deutsche Messe AG.

Tuesday, 30 July 2013

Indian Academic Builds Humanoid Robot with Artificial Intelligence

An Indian academic, Ram Ramamoorthy, has developed a humanoid robot that plays the traditional 'rock-paper-scissors' game using artificial intelligence. It will take on humans and learn its opponents' strategies as it plays.
Dr. Ram Ramamoorthy is from Bangalore and received his degree in Instrumentation and Electronics Engineering from the University of Bengaluru. He then completed his PhD at the University of Texas at Austin.
He arrived at the University of Edinburgh in 2007, and now works at the largest computer science department in Europe, the School of Informatics.
The two-foot-high robots are programmed to respond to gestures made by humans and simultaneously anticipate their actions. They do this with the help of Kinect, a motion-sensing device made by Microsoft.
Ramamoorthy, who is overseeing the robots' participation in the Science Festival, was quoted in Outlook stating: "These popular little robots are very entertaining to watch and we hope that the Science Festival crowds will enjoy seeing them in action. However, our research has a serious and very useful purpose - we hope to develop machines that are smart enough to work alongside humans, assisting in tasks where people could use a helping hand."
Rock-paper-scissors is a hand game for two players, in which each player forms one of three shapes with an outstretched hand. Rock beats scissors, scissors beats paper, and paper beats rock. If both players produce the same shape, the round is a tie, reports TOI.
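Those rules can be encoded directly; this is a minimal illustrative sketch, not the robot's actual program.

```python
# The rules above, encoded directly: each shape beats exactly one other.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(a, b):
    """Return 0 on a tie, 1 if shape a wins, 2 if shape b wins."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else 2
```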
The intelligent robots will play humans in a series of sell-out shows at this year's Edinburg International Science Festival.

How Computers Can Learn Better

Reinforcement learning is a technique, common in computer science, in which a computer system learns how best to solve some problem through trial-and-error. Classic applications of reinforcement learning involve problems as diverse as robot navigation, network administration and automated surveillance.
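
To make the trial-and-error idea concrete, here is a minimal sketch of one classic reinforcement-learning method, tabular Q-learning, on a toy five-state corridor where an agent must walk right to a goal. The environment, parameters, and names are all illustrative, not taken from the paper discussed below.

```python
import random

# Toy corridor: states 0..4, goal at state 4, actions step left or right.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)   # reward only at the goal

random.seed(0)
for _ in range(300):              # episodes of trial and error
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)            # explore by acting at random
        s2, r = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: in every non-goal state, move right (+1).
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

After a few hundred episodes the value estimates settle, and the greedy policy reads off the right answer in every state even though the agent was never told it.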

At the Association for Uncertainty in Artificial Intelligence’s annual conference this summer, researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and Computer Science and Artificial Intelligence Laboratory will present a new reinforcement-learning algorithm that, for a wide range of problems, allows computer systems to find solutions much more efficiently than previous algorithms did.

The paper also represents the first application of a new programming framework that the researchers developed, which makes it much easier to set up and run reinforcement-learning experiments. Alborz Geramifard, a LIDS postdoc and first author of the new paper, hopes that the software, dubbed RLPy (for reinforcement learning and Python, the programming language it uses), will allow researchers to more efficiently test new algorithms and compare algorithms’ performance on different tasks. It could also be a useful tool for teaching computer-science students about the principles of reinforcement learning.

Geramifard developed RLPy with Robert Klein, a master’s student in MIT’s Department of Aeronautics and Astronautics. RLPy and its source code were both released online in April.

Every reinforcement-learning experiment involves what’s called an agent, which in artificial-intelligence research is often a computer system being trained to perform some task. The agent might be a robot learning to navigate its environment, or a software agent learning how to automatically manage a computer network. The agent has reliable information about the current state of some system: The robot might know where it is in a room, while the network administrator might know which computers in the network are operational and which have shut down. But there’s some information the agent is missing — what obstacles the room contains, for instance, or how computational tasks are divided up among the computers.

Finally, the experiment involves a “reward function,” a quantitative measure of the progress the agent is making on its task. That measure could be positive or negative: The network administrator, for instance, could be rewarded for every failed computer it gets up and running but penalized for every computer that goes down.
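
A reward function of that shape might be sketched as follows; the set-based state representation is an illustrative assumption, not the paper's actual formulation.

```python
# Reward for the hypothetical network administrator: +1 per machine
# brought back up this step, -1 per machine that newly goes down.
def reward(down_before, down_after):
    """down_before/down_after: sets of machine ids currently failed."""
    recovered = down_before - down_after
    newly_failed = down_after - down_before
    return len(recovered) - len(newly_failed)
```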

The goal of the experiment is for the agent to learn a set of policies that will maximize its reward, given any state of the system. Part of that process is to evaluate each new policy over as many states as possible. But exhaustively canvassing all of the system’s states could be prohibitively time-consuming.
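
One common workaround is to estimate a policy's value from a random sample of states rather than all of them. This is a generic sketch of that idea, with `simulate` as an illustrative stand-in for running the policy from a given start state.

```python
import random

def evaluate(policy, states, simulate, n=100):
    """Average return of `policy` over up to n randomly sampled states."""
    sample = random.sample(states, min(n, len(states)))
    return sum(simulate(policy, s) for s in sample) / len(sample)

# Toy check: a simulator whose return is the same from every start state.
random.seed(1)
value = evaluate("noop", list(range(100)), lambda p, s: 1.0, n=10)
```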

Consider, for instance, the network-administration problem. Suppose that the administrator has observed that in several cases, rebooting just a few computers restored the whole network. Is that a generally applicable solution?

One way to answer that question would be to evaluate every possible failure state of the network. But even for a network of only 20 machines, each of which has only two possible states — working or not — that would mean canvassing a million possibilities. 
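
The arithmetic behind that "million possibilities" is simple: each of the 20 machines is independently working or failed, so the joint states multiply.

```python
# 20 machines, each in one of 2 states, gives 2**20 joint failure states.
n_machines = 20
n_states = 2 ** n_machines        # 1,048,576 -- just over a million
```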

Faced with such a combinatorial explosion, a standard approach in reinforcement learning is to try to identify a set of system “features” that approximate a much larger number of states. For instance, it might turn out that when computers 12 and 17 are down, it rarely matters how many other computers have failed: A particular reboot policy will almost always work. The failure of 12 and 17 thus stands in for the failure of 12, 17 and 1; of 12, 17, 1 and 2; of 12, 17 and 2, and so on.

Geramifard — along with Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, Thomas Walsh, a postdoc in How’s lab, and Nicholas Roy, an associate professor of aeronautics and astronautics — developed a new technique for identifying pertinent features in reinforcement-learning tasks. The algorithm first builds a data structure known as a tree — kind of like a family-tree diagram — that represents different combinations of features. In the case of the network problem, the top layer of the tree would be individual machines, the next layer would be combinations of two machines, the third layer would be combinations of three machines, and so on.
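
For a tiny network, the layers of such a tree can be enumerated directly; this sketch shows the structure only, not the paper's actual data structure.

```python
from itertools import combinations

# Layers of the tree for a 4-machine network: layer k holds every
# combination of k machines.
machines = [1, 2, 3, 4]
tree = {k: list(combinations(machines, k)) for k in range(1, 4)}
# Layer 1: 4 single machines; layer 2: 6 pairs; layer 3: 4 triples.
```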

The algorithm then begins investigating the tree, determining which combinations of features dictate a policy’s success or failure. The relatively simple key to its efficiency is that when it notices that certain combinations consistently yield the same outcome, it stops exploring them. For instance, if it notices that the same policy seems to work whenever machines 12 and 17 have failed, it stops considering combinations that include 12 and 17 and begins looking for others.
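
The pruning rule might look something like this sketch; the data structures and threshold are assumptions made for illustration, not the paper's implementation.

```python
# Once a combination of machines has yielded the same outcome often
# enough, stop exploring candidates that merely extend it.
def prune(candidates, observations, min_trials=3):
    decided = {combo for combo, outcomes in observations.items()
               if len(outcomes) >= min_trials and len(set(outcomes)) == 1}
    return [c for c in candidates
            if not any(set(d) < set(c) for d in decided)]

observed = {(12, 17): ["fixed", "fixed", "fixed"],   # consistent outcome
            (3,): ["fixed", "failed"]}               # still ambiguous
remaining = prune([(12, 17, 1), (12, 17, 2), (3, 5), (4, 6)], observed)
# Extensions of (12, 17) are dropped; unrelated combinations survive.
```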

Geramifard believes that this approach captures something about how human beings learn to perform new tasks. “If you teach a small child what a horse is, at first it might think that everything with four legs is a horse,” he says. “But when you show it a cow, it learns to look for a different feature — say, horns.” In the same way, Geramifard explains, the new algorithm identifies an initial feature on which to base judgments and then looks for complementary features that can refine the initial judgment.

RLPy allowed the researchers to quickly test their new algorithm against a number of others. “Think of it as like a Lego set,” Geramifard says. “You can snap one module out and snap another one in its place.” 

In particular, RLPy comes with a number of standard modules that represent different machine-learning algorithms; different problems (such as the network-administration problem, some standard control-theory problems that involve balancing pendulums, and some standard surveillance problems); different techniques for modeling the computer system’s environment; and different types of agents.

It also allows anyone familiar with the Python programming language to build new modules. They just have to be able to hook up with existing modules in prescribed ways.
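
The "Lego set" design can be sketched as modules that share a small interface, so any problem can be paired with any algorithm. The classes below are illustrative only, not RLPy's actual API.

```python
# Shared interfaces: a Domain is a problem, an Agent is an algorithm.
class Domain:
    def reset(self): ...
    def step(self, action): ...      # returns (next_state, reward, done)

class Agent:
    def act(self, state): ...
    def learn(self, state, action, reward, next_state): ...

def run_experiment(domain, agent, episodes):
    """Any Domain snaps together with any Agent, like Lego bricks."""
    for _ in range(episodes):
        state, done = domain.reset(), False
        while not done:
            action = agent.act(state)
            next_state, reward, done = domain.step(action)
            agent.learn(state, action, reward, next_state)
            state = next_state

# A toy pairing: a one-step domain and an agent that counts its updates.
class OneStepDomain(Domain):
    def reset(self): return 0
    def step(self, action): return 0, 1.0, True

class CountingAgent(Agent):
    def __init__(self): self.updates = 0
    def act(self, state): return 0
    def learn(self, s, a, r, s2): self.updates += 1

agent = CountingAgent()
run_experiment(OneStepDomain(), agent, episodes=3)
```

Swapping in a different domain or agent requires changing only the object passed to `run_experiment`, which is the modularity the researchers describe.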

Geramifard and his colleagues found that in computer simulations, their new algorithm evaluated policies more efficiently than its predecessors, arriving at more reliable predictions in one-fifth the time.

RLPy can be used to set up experiments that involve computer simulations, such as those that the MIT researchers evaluated, but it can also be used to set up experiments that collect data from real-world interactions. In one ongoing project, for instance, Geramifard and his colleagues plan to use RLPy to run an experiment involving an autonomous vehicle learning to navigate its environment. In the project’s initial stages, however, he’s using simulations to begin building a battery of reasonably good policies. “While it’s learning, you don’t want to run it into a wall and wreck your equipment,” he says.

Saturday, 20 July 2013

AI Technology



As the name suggests, artificial intelligence (AI) is technology, and a branch of computer science, that studies and develops intelligent machines and software. AI is also sometimes described as the fifth generation of computing, which is still under development. At the heart of AI technology is the development of intelligent agents: systems that perceive their environment and take actions that maximize their chances of success. John McCarthy, who coined the term in 1955, defined it as "the science and engineering of making intelligent machines".
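
That definition of an intelligent agent can be sketched in a few lines: perceive the environment, then take the action that maximizes the chance of success. The percept and the scoring function here are purely illustrative.

```python
# Minimal agent step: pick the action with the highest estimated
# chance of success given the current percept.
def choose_action(percept, actions, success_chance):
    return max(actions, key=lambda a: success_chance(percept, a))

chosen = choose_action("obstacle ahead", ["advance", "turn"],
                       lambda p, a: 0.9 if a == "turn" else 0.1)
```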


Artificial Intelligence (AI) techniques include research into areas such as scheduling and planning, intelligent and expert systems (including decision support), machine learning, and agents. As a whole, this research area supports other areas of ICT and beyond; for example, machine learning techniques are widely used in computer vision and speech recognition.


History of artificial intelligence

The field of AI research was founded at a conference on the campus of Dartmouth College in 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Eventually it became obvious that they had grossly underestimated the difficulty of the project. In the early 1980s, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again. This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field. Undaunted, there are those who make extraordinary predictions even now.

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. Their discussions culminated in the founding of artificial intelligence as an academic discipline in 1956.

Advantages of Artificial Intelligence

We use AI technology daily to some extent, for example in voice recognition systems, in computer games played against the computer, or when challenging a computer opponent at chess. Companies such as the Electric Sheep Company and other firms have the potential to initiate a new age of AI.
Novamente claims that its system is the first to allow artificial intelligences to progress through a process of self-analysis and learning [source: Novamente]. The company hopes that its AI will also distinguish itself from other attempts at AI by surprising its creators with its capabilities -- for example, by learning a skill or task that it wasn't programmed to perform. Novamente has already created what it terms an "artificial baby" in the AGISim virtual world [source: Novamente]. This artificial baby has learned to perform some basic functions.
This technology could thus be used in education, by developing specially programmed tutor robots to teach. We could even develop robots for household work, for example as domestic helpers.