Abstract: Most routing algorithms are based on shortest-path, distance-vector or link-state algorithms; they are not capable of adapting to run-time changes such as traffic load or the delivery time to the destination. Thus, although they provide a shortest path, this shortest path may not be the optimum path for delivering packets. The optimum path can only be achieved when the state of the network is considered every time packets are transmitted from the source. The state of the network depends on a number of network properties, such as the queue lengths of all the nodes, the condition of all the links and nodes (whether they are up or down) and so on. The Q-learning framework of Watkins and Dayan is therefore used to develop and improve such adaptive routing algorithms. In Q-routing, the cost tables are replaced by Q-tables, and the interpretation, exploration and updating mechanism of these Q-tables are modified to make use of the Q-learning framework. This improves the Q-routing algorithm further by improving the quality and quantity of its exploration. Q-learning is based on reinforcement learning, a popular machine learning technique that allows an agent to automatically determine the optimal behaviour for achieving a specific goal based on the positive or negative feedback it receives from the environment after taking an action. This study describes Q-routing protocols over mobile ad hoc networks. An ad hoc network is a wireless network with no infrastructure support. In such a network, each mobile node operates not only as a host but also as a router, forwarding packets for other mobile nodes in the network that may not be within direct reach.
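The Q-table update described above can be sketched as follows. This is a minimal illustrative sketch, assuming the classic Q-routing delivery-time update (a node revises its estimate of the time to deliver a packet to a destination via a neighbour, using the queueing delay, the transmission delay and the neighbour's own best remaining estimate). All names and parameter values here are assumptions for illustration, not taken from the paper.

```python
def q_routing_update(q_table, dest, neighbor, q_wait, trans_delay,
                     best_remaining, alpha=0.5):
    """Update node x's estimate Q_x(dest, neighbor): the expected time to
    deliver a packet to `dest` when forwarding it via `neighbor`.

    q_wait         -- time the packet spent in x's own queue
    trans_delay    -- transmission delay from x to the neighbour
    best_remaining -- neighbour's best estimate, min over z of Q_neighbor(dest, z)
    alpha          -- learning rate controlling how fast estimates adapt
    """
    old = q_table[(dest, neighbor)]
    # New sample of the total delivery time via this neighbour.
    target = q_wait + trans_delay + best_remaining
    # Move the stored estimate a fraction alpha toward the new sample.
    q_table[(dest, neighbor)] = old + alpha * (target - old)
    return q_table[(dest, neighbor)]


# Example: node x currently estimates 10.0 time units to reach D via B.
q_table = {("D", "B"): 10.0}
q_routing_update(q_table, "D", "B", q_wait=1.0, trans_delay=2.0,
                 best_remaining=5.0, alpha=0.5)
```

Because the estimate is refreshed on every forwarded packet, congestion (longer queues) raises the estimates along a loaded path, so traffic gradually shifts to less congested routes even when they are not the shortest in hop count.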
Rahul Desai and B.P. Patil, 2013. MANET with Q Routing Protocol. Asian Journal of Information Technology, 12: 20-26.