
ARTIFICIAL INTELLIGENCE

Embedded Intelligence
By Jeff Stefan, Lead Technical Development Engineer, OnStar

Artificial intelligence (AI) is a field that failed under the weight of its own unrealistic expectations and prophecies. Despite four decades of research and development, the year 2001 came and went with no sign of HAL. Yet intelligence has had a big impact on computing. To use just one example, spell checkers were considered amazingly intelligent applications when they first emerged. Now they're ho-hum. The truth is that real AI programming is far from glamorous. It basically boils down to solving a problem that assumes some intelligence by applying efficient search methods to a directed data set. Intelligent methods can be mixed and matched with more traditional procedural methods; no rule says that the techniques are mutually exclusive. Having a working knowledge of proven AI techniques, and knowing when to use them, enables the programmer to add power, elegance, and sophistication to applications.

Data representation

Many search spaces are nicely represented by graphs, so an effective way to solve a problem requiring intelligence is to put the data in a form easily visualised as a graph. Graphs and graph theory provide visualisation aids, formal rules, and techniques that are useful in making ad hoc decisions. Graphs have their own unique set of terminology. A graph consists of nodes and edges. A node is generally drawn as a circle; an edge is a line between nodes. In software, nodes are usually implemented as data structures, with edges as pointers or array indices. Many algorithms exist to traverse graphs, including depth-first search and breadth-first search. Both depth- and breadth-first searches are blind searching algorithms. The depth-first algorithm searches from some root node to the farthest leaf node along a series of edges, then on to the next leaf node. A breadth-first search, on the other hand, examines all of the nodes one edge distant from the root first, then all of the nodes two edges distant, and so on. (If you aren't familiar with these graph-searching algorithms, you might want to consult an algorithms text like Introduction to Algorithms by Thomas H. Cormen or Computer Algorithms: Introduction to Design and Analysis by Sara Baase.) Algorithms like these are blind because no information is found during the search that can tell us where the proper solution lies. These searches blindly follow their traversal algorithm, possibly searching every node before locating the answer sought. Knowing when to use a blind search is a pleasant dividend gained from adding AI to your bag of tricks.
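
To make the graph terminology concrete, here is a minimal sketch in C of one way nodes and edges might be represented and traversed. It is illustrative only: the node numbering and the adj table are hypothetical, not taken from the road network discussed below, and edges are stored as array indices, as described above. The breadth-first search is written iteratively with an explicit queue; a depth-first version would simply replace the queue with a stack.

#include <stdio.h>

#define MAX_NODES 6   /* small, hypothetical graph for illustration */

/* Edges stored as array indices: adj[n][k] is the k-th neighbour of node n,
 * with -1 marking the end of the neighbour list. */
static const int adj[MAX_NODES][4] = {
    /* node 0 */ { 1, 5, -1, -1 },
    /* node 1 */ { 0, 2, -1, -1 },
    /* node 2 */ { 1, 3, -1, -1 },
    /* node 3 */ { 2, 4, -1, -1 },
    /* node 4 */ { 3, -1, -1, -1 },
    /* node 5 */ { 0, -1, -1, -1 },
};

/* Blind breadth-first search: returns 1 if goal is reachable from start. */
static int bfs(int start, int goal)
{
    int queue[MAX_NODES];
    int head = 0, tail = 0;
    int visited[MAX_NODES] = { 0 };

    queue[tail++] = start;
    visited[start] = 1;

    while (head < tail) {
        int n = queue[head++];                 /* take the oldest queued node */
        if (n == goal)
            return 1;                          /* found the goal */
        for (int k = 0; k < 4 && adj[n][k] >= 0; k++) {
            int next = adj[n][k];
            if (!visited[next]) {              /* enqueue each unvisited neighbour */
                visited[next] = 1;
                queue[tail++] = next;
            }
        }
    }
    return 0;                                  /* goal not reachable */
}

int main(void)
{
    printf("goal reachable: %d\n", bfs(0, 4));
    return 0;
}

Note that the search is blind in exactly the sense described above: it reports only whether the goal was reached, and it may visit every node along the way.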

Vehicle navigation

Let's say we're designing a system that traverses a network of road segments. Maybe it's an automated garbage collection system, a travelling camera, or an automatic traffic re-routing system. Figure 1 is a map of part of San Francisco.

Figure 1: A road network

The first step in creating a graph to represent this data is to determine what elements are nodes. Road intersections are useful, if not obvious, choices for nodes. With those inserted, the picture of the graph is now partially complete. The result is a goal-less, static representation of the map. The next step is to figure out what additional information the system will need to make intelligent decisions. If the goal of the system is to select the best route for a vehicle to take from one intersection to another, it would be natural to assign weights to the road segments connecting intersections. In the simplest case, there are no one-way streets and all have the same speed limit and number of lanes. (If these things are not true, it's relatively painless to extend our graph and weighting model once we've built it.) Assigning weights to the graph's edges can help our system find the best route. These weights are determined somewhat arbitrarily; we'll assume they represent the average traffic density. Dynamic weights, based on the time of day or local conditions, are a possibility too, but those don't affect the following analysis. In Figure 2, the edge weights are designed to represent the average amount of traffic traversing the road per hour (these statistics aren't based on any actual data, but are useful for our discussion).

Figure 2: Traversing the graph


If our vehicle must be routed from the corner of Scott and Jackson (node 5) to the corner of Fillmore and Vallejo (node 17), using the criterion of least traffic density, our search algorithm should locate the path with the least total weight. It's easy to draw our graph on paper, but we still need to represent it within the computer. The two most common representations for graphs are adjacency matrices and adjacency lists. An adjacency matrix is a static, multi-dimensional array, with the values in the matrix representing the weights of the edges from one node to another. Figure 3 shows a partial adjacency matrix containing the edge weights between nodes 1 through 6 of our example network. The weight of the edge between nodes 1 and 6 is found in the top right corner (and duplicated at bottom left). The full adjacency matrix for the 20-node road network in Figure 2 would consist of 400 elements, only 31 of which would contain valuable information. An adjacency list is usually implemented as a linked list. Figure 4 shows the adjacency lists for nodes 1 to 6 of our network. The edge weights are not shown in the figure, but are easily stored in the data structure. When deciding between an adjacency matrix and an adjacency list, you should consider the following factors:

If the graph is dense or small, represent it with an adjacency matrix. The advantage of a matrix is that the weights can be accessed directly, without the overhead of pointer management and list traversal.

If the graph is sparse or large, an adjacency list can reduce the amount of wasted memory.

If new nodes or edges may be added or deleted at runtime, use an adjacency list. Of course, this also means you'll need a dynamic memory capability.

Figure 3: An adjacency matrix
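
To make the trade-off concrete, here is a brief sketch in C of both representations for a handful of nodes. The weights shown are illustrative placeholders, not the values from Figure 3, and the structure names are my own.

#define NUM_NODES 6
#define NO_EDGE   0   /* zero marks "no road between these two intersections" */

/* Adjacency matrix: weight[i][j] holds the traffic weight of edge i-j.
 * Weights are read directly, but the table holds NUM_NODES * NUM_NODES
 * entries even when most of them are NO_EDGE. */
static const unsigned char weight[NUM_NODES][NUM_NODES] = {
    /*         0   1   2   3   4   5 */
    /* 0 */ {  0, 20,  0,  0,  0, 30 },
    /* 1 */ { 20,  0, 15,  0,  0,  0 },
    /* 2 */ {  0, 15,  0, 25,  0,  0 },
    /* 3 */ {  0,  0, 25,  0, 10,  0 },
    /* 4 */ {  0,  0,  0, 10,  0,  0 },
    /* 5 */ { 30,  0,  0,  0,  0,  0 },
};

/* Adjacency list: one linked list of (neighbour, weight) pairs per node.
 * Only the edges that actually exist consume memory, at the cost of
 * pointer management and list traversal. */
struct edge {
    int          to;       /* neighbouring node               */
    unsigned int w;        /* edge weight (traffic density)   */
    struct edge *next;     /* next edge leaving the same node */
};

struct node {
    struct edge *edges;    /* head of this node's edge list   */
};

static struct node graph[NUM_NODES];

With the matrix, the weight of the edge between nodes i and j is simply weight[i][j]; with the list, you walk graph[i].edges until an entry with to == j turns up.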

Figure 4: An adjacency list

Heuristics

If we blindly search the graph following the minimum-weight edge out of each node, we may go the wrong way and never wind up at the goal. A more intelligent search can be constructed using heuristics, which are general searching guidelines that work most of the time. Call them rules of thumb. A simple human example is, "It's April and cloudy outside, so take an umbrella." Just because it's April and cloudy doesn't mean it will rain, but the chances are better than normal under those conditions. Heuristics are an important part of an efficient, effective search strategy. When in doubt, start by taking an initial stab at the problem, even if it might be off base. For the road traversal problem, one heuristic might be, "When a node has exits on two (or more) equally weighted edges, perform a breadth-first search. Pursue the path with the least total weight." This helps when node 15 is encountered. Two edges with weight fifteen exit that node. Using breadth-first search, the next-level nodes and their exit edge weights are examined. The next-level nodes are 14 and 20. Edges with weights fifteen and forty-five exit from those nodes, respectively. Using the least edge-weight criterion, node 14 is selected to continue the search, which is reasonable; node 20 is abandoned.

Some heuristics may sound obvious, but even these can lend insight into the nature of the problem to be solved. The most basic heuristic for the road traversal problem can be stated, "Follow the path with the least edge weights." That's simple and obvious, but it lays down the baseline rule for the search. Methods that follow the least edge weight are called greedy algorithms. Greedy algorithms are based on instant gratification. Without looking ahead, a greedy algorithm chooses the cheapest path to follow in the short term. This doesn't guarantee a solution and, on occasion, a less optimal path will be followed. When asked why it followed a particular path, even though it turned out to be wrong, a greedy algorithm would answer, "It seemed like a good idea at the time." What seems obvious to a human is not obvious to a machine.

Our heuristic, "When a node has exits on two (or more) equally weighted edges, perform a breadth-first search. Pursue the path with the least total weight," isn't bad, but it has a flaw. What if the least-cost path travels away from the goal node? Then we may be choosing a more expensive path. This brings to light the need to know the direction from the current node to the goal node. Developing heuristics in this fashion is a good way of uncovering the bits of core knowledge required to solve a problem. One way to solve the road network direction problem is to dynamically calculate the distance and bearing between the current node and the goal. This requires latitude and longitude data for each node and floating-point operations every step of the way. This is probably the least optimal solution. A better strategy is to find a set of rules for the differences between pairs of latitudes and longitudes. This helps us distil the knowledge requirements to their minimum set. The first step is to determine the relationship between direction and latitude and longitude. Figure 5 shows direction derived from latitude and longitude differences. When travelling North, latitude increases; when travelling West, longitude increases; and so on. From these simple relationships, we can eliminate a lot of floating-point operations at each node.
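
In code, the purely greedy step described above amounts to only a few lines. The sketch below reuses the hypothetical struct node and struct edge adjacency-list declarations from the earlier listing; note that it pays no attention whatsoever to where the goal lies.

/* Greedy step: choose the cheapest edge leaving `current`, ignoring the goal.
 * Returns the chosen neighbour, or -1 if the node has no exits. */
static int greedy_step(const struct node *graph, int current)
{
    const struct edge *e = graph[current].edges;
    int best_node = -1;
    unsigned int best_weight = ~0u;          /* effectively "infinity" */

    for (; e != NULL; e = e->next) {
        if (e->w < best_weight) {            /* instant gratification: least weight wins */
            best_weight = e->w;
            best_node = e->to;
        }
    }
    return best_node;   /* may well lead away from the goal, hence the need for direction rules */
}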


Relating this to the map, the starting latitude and longitude at the intersection of Scott and Jackson is N37 47.514 latitude and W122 26.356 longitude. The goal intersection at Fillmore and Vallejo is N37 47.725 latitude and W122 26.002 longitude. Applying the compass rules from Figure 5, it's apparent that the goal intersection is northeast of the starting intersection.

Figure 5: Direction, in latitude and longitude terms

Six individual rules are built from the direction relationship:

Rule 1: if Lat(goal) > Lat(current) then goal is North
Rule 2: if Lat(goal) < Lat(current) then goal is South
Rule 3: if Lat(goal) = Lat(current) then goal is Null
Rule 4: if Lon(goal) > Lon(current) then goal is West
Rule 5: if Lon(goal) < Lon(current) then goal is East
Rule 6: if Lon(goal) = Lon(current) then goal is Null

We can combine rules from this basic set to accommodate complex directions, such as Northeast and Southwest. This is done simply by ANDing the basic rules together. The legal rule combinations are:

Rule 1 ^ Rule 4 -> goal is Northwest
Rule 1 ^ Rule 5 -> goal is Northeast
Rule 1 ^ Rule 6 -> goal is North
Rule 2 ^ Rule 4 -> goal is Southwest
Rule 2 ^ Rule 5 -> goal is Southeast
Rule 2 ^ Rule 6 -> goal is South
Rule 3 ^ Rule 4 -> goal is West
Rule 3 ^ Rule 5 -> goal is East
Rule 3 ^ Rule 6 -> goal is Null

Including the basic and complex rules results in a successful search method. If the goal is Northeast of the current node, North and East moves are legal, but South and West moves are not. When the search arrives at node 12, the logical path to take appears to be the edge from node 12 to node 11 with a weight of 15. The direction is North, which appears to be legal, and the edge weight is minimal. This is entirely wrong, since the search steers away from the goal node. This is where our rule base should constrain the search. Node 12 lies on the same latitude as node 17, so Rule 3 is true. The longitude is decreasing, so Rule 5 is true. If Rule 3 and Rule 5 are true, the goal is due East. The rule base has done its job, overriding and guiding the otherwise blind search. This is shown in Figure 6.

Figure 6: Rule base violation
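
Expressed in code, the rule base collapses to a handful of integer comparisons per node. The following C sketch is one possible rendering under stated assumptions: latitude and longitude are hypothetical fixed-point fields scaled so that plain comparison is meaningful, and west longitude grows as you travel west, matching Rules 4 and 5 above.

/* Compass relationship between the current node and the goal node. */
enum dir { DIR_NONE, DIR_N, DIR_S, DIR_E, DIR_W, DIR_NE, DIR_NW, DIR_SE, DIR_SW };

struct position {
    long lat;   /* north latitude, fixed point (assumed scaling) */
    long lon;   /* west longitude, fixed point (assumed scaling) */
};

static enum dir goal_direction(struct position cur, struct position goal)
{
    int north = goal.lat > cur.lat;     /* Rule 1 */
    int south = goal.lat < cur.lat;     /* Rule 2 */
    int west  = goal.lon > cur.lon;     /* Rule 4: west longitude increases going west */
    int east  = goal.lon < cur.lon;     /* Rule 5 */

    if (north && west) return DIR_NW;   /* Rule 1 ^ Rule 4 */
    if (north && east) return DIR_NE;   /* Rule 1 ^ Rule 5 */
    if (south && west) return DIR_SW;   /* Rule 2 ^ Rule 4 */
    if (south && east) return DIR_SE;   /* Rule 2 ^ Rule 5 */
    if (north)         return DIR_N;    /* Rule 1 ^ Rule 6 */
    if (south)         return DIR_S;    /* Rule 2 ^ Rule 6 */
    if (west)          return DIR_W;    /* Rule 3 ^ Rule 4 */
    if (east)          return DIR_E;    /* Rule 3 ^ Rule 5 */
    return DIR_NONE;                    /* Rule 3 ^ Rule 6: already at the goal */
}

A search step can then discard any edge whose direction contradicts goal_direction(); at node 12 that is exactly the check that rejects the tempting move north toward node 11 and keeps the search heading east toward node 17.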

The blind leading the blind

As we've seen, graph representations and blind searching algorithms alone are not sufficient to solve most problems. But combining these techniques with heuristics and sets of problem-specific rules, as we have done here, results in a valuable use of artificial intelligence. Similar techniques can be applied in a variety of application domains. Although our example concentrated on static data, the techniques shown here are perhaps most useful when edges and edge weights are changing and no hard coding of rules is possible.

Obviously, embedded systems suffer some unique constraints. Recursion is usually not allowed in embedded programming, though it is a common technique in graph-searching algorithms. Dynamic memory allocation is generally frowned upon in embedded systems for critical applications, but it's difficult to add and delete nodes from linked-list data representations without it. In light of these considerations, here are some tips for embedding intelligence:

Consider offloading parts of the processing to a more capable system. Perhaps the embedded system need only solve part of the problem on the fly.

Avoid recursion. Any recursive function can be rewritten as an iterative function.

Keep dynamic memory allocation to a minimum. If a list's size remains fairly constant, use an array instead. Size the array for the maximum size of the list and return failure if there's an attempt to exceed that maximum (see the sketch after this list).

Think of intelligence in terms of an insect instead of a supercomputer. Think about how low-level life forms deal with unexpected situations and obstacles.

Above all, use combinations of blind, greedy, and intelligent searching algorithms. There's certainly no restriction that only one method must be used to solve a problem requiring intelligence.
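
As one hedged illustration of the array-instead-of-list tip, the sketch below keeps a node's neighbours in a statically sized table: the capacity is chosen for the worst case at build time, nothing is ever allocated, and an attempt to exceed the maximum is reported as a failure. The names and the capacity are my own, not part of the original article.

#define MAX_NEIGHBOURS 8          /* worst-case number of exits from one intersection */

/* Fixed-capacity neighbour set: bounded memory, no dynamic allocation. */
struct neighbour_list {
    int          count;
    int          node[MAX_NEIGHBOURS];
    unsigned int weight[MAX_NEIGHBOURS];
};

/* Returns 0 on success, -1 if the fixed capacity would be exceeded. */
static int add_neighbour(struct neighbour_list *nl, int node, unsigned int weight)
{
    if (nl->count >= MAX_NEIGHBOURS)
        return -1;                /* fail cleanly rather than call malloc() */
    nl->node[nl->count]   = node;
    nl->weight[nl->count] = weight;
    nl->count++;
    return 0;
}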


AI References

Dean, Thomas, et al. Artificial Intelligence: Theory and Practice. Reading, MA: Addison-Wesley, 1995. This is a good text with a wide range of subject coverage. As with most AI books, all of the examples are in Lisp.

Charniak, Eugene and Drew McDermott. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley, 1985. Although this is an older book, it's filled with practical examples. This is another Lisp-centric text.

Bratko, Ivan. Prolog Programming for Artificial Intelligence, Third Edition. Reading, MA: Addison-Wesley, 2000. This book expresses AI concepts and algorithms using Prolog, another popular AI programming language. The book is constructed in two parts: the first covers Prolog programming and the second covers AI concepts and algorithms. The sections on searching are excellent.

Winston, Patrick Henry. Artificial Intelligence, Third Edition. Reading, MA: Addison-Wesley, 1992. This is an AI classic, loaded with detailed pseudocode.

Algorithm References

Cormen, Thomas H., et al. Introduction to Algorithms. Cambridge, MA: MIT Press, 1997. This is a definitive reference, providing consistent pseudocode throughout the text.

Baase, Sara and Allen Van Gelder. Computer Algorithms, Third Edition. Reading, MA: Addison-Wesley, 2000. This excellent book provides Java and Java-like pseudocode for all of the examples and concepts.


