
Rayna Maddox, Aryabrata Basu, Dylan Sexton (Team R.A.D.)

October 8, 2012

Midterm Project

Abstract: The objective of this midterm project is to create a simulation that uses a blackboard-based control architecture together with a fuzzy rule-based controller.

Background: Many applications of robotics and Artificial Intelligence (AI) make use of what is known as fuzzy logic. Fuzzy logic is a type of logic that deals with approximate values rather than the fixed values of its Boolean counterpart. It is used heavily in decision making where choices must be made from input that does not cleanly belong to a property or to its negation (like 0 or 1 in Boolean logic); a small illustrative sketch of this contrast is given at the end of this section. In this project, fuzzy logic is combined with a blackboard control architecture to produce a collaborative pursuit simulation. Blackboard control architectures use a common knowledge base, referred to as the blackboard, which is updated frequently to give the program, AI, robot, or, for the purposes of this project, a multi-agent system a better understanding of a constantly changing environment.

Approach: We were given the option to do a demonstration or to build a robot in order to simulate an environment with a program that uses a blackboard control architecture together with fuzzy logic to make decisions as the environment changes. We decided that one of the best ways to implement this would be a simulation of the popular game of cat and mouse: the user moves through a maze while a multi-agent system dispersed throughout the maze actively tracks the user.

Software: A simulation environment (a maze) was created for this project using the Unity 3D game engine.
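To make the fuzzy-versus-Boolean contrast from the Background concrete, the following is a minimal C# sketch. It is illustrative only; the names (IsNearBoolean, NearMembership) and the linear falloff are assumptions made for this example, not part of the project code.

using UnityEngine;

public static class FuzzyExample
{
    // Boolean logic: the user either is or is not "near" the agent (0 or 1).
    public static bool IsNearBoolean(float distance, float threshold)
    {
        return distance < threshold;
    }

    // Fuzzy logic: membership in "near" falls off gradually between
    // fullMembership and zeroMembership, giving a degree in [0, 1]
    // instead of a hard yes/no answer.
    public static float NearMembership(float distance, float fullMembership, float zeroMembership)
    {
        return Mathf.Clamp01((zeroMembership - distance) / (zeroMembership - fullMembership));
    }
}

For example, with fullMembership = 5 and zeroMembership = 15, a user at distance 10 is "near" to degree 0.5 rather than simply near or not near.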

Construction: To create an elaborate cat-and-mouse scenario, we started from the notion of agents chasing a single user. To add a layer of complexity, we placed the entire simulation inside a maze. The idea behind the maze is to implement the perfect game of hide-and-seek. With this in mind, we scattered a number of agents through the maze to patrol it while seeking a target. The patrolling behavior of the agents is achieved using a ray-casting technique. Each agent has a periphery of vision, and any intrusion it detects is dealt with by damage to the health of the user. The user (i.e., the thief) must traverse the maze without being detected, and if caught must elude the agents in order to survive. The agents, on the other hand, continue to search for the thief and take it down whenever an intrusion is detected. Both the agents and the user have states that are constantly updated via a centralized data structure: the updates to each agent's state are continually broadcast to a globally defined data structure that is accessed by the other agents simultaneously, which shapes their decisions as they chase the user collaboratively. This is how we implemented the blackboard control architecture. This sort of architecture gives us a better understanding of the global state of the system; in this case, it refines each agent's ability to chase the user. On top of the blackboard, a fuzzy rule-based controller decides how each agent acts on that shared information; an illustrative sketch of both pieces is given below.
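The following is a minimal sketch of how such a blackboard and a fuzzy rule-based controller could be wired together in Unity C#. It is illustrative only: the names (Blackboard, AgentState, lastKnownUserPosition, timeSinceUserSeen), the membership functions, and the chase/patrol rule are assumptions made for this example and do not reproduce the project code.

using System.Collections.Generic;
using UnityEngine;

// The "blackboard": a globally accessible data structure shared by all agents.
public static class Blackboard
{
    public class AgentState
    {
        public Vector3 position;   // where the agent currently is
        public bool    seesUser;   // whether the agent's vision cone currently contains the user
        public float   health;     // remaining health of the agent
    }

    // One entry per agent, keyed by an agent id; every agent reads and writes here.
    public static readonly Dictionary<int, AgentState> agents = new Dictionary<int, AgentState>();

    // Last position at which any agent saw the user (the thief), and how long ago.
    public static Vector3 lastKnownUserPosition = new Vector3(-1, -1, -1);
    public static float   timeSinceUserSeen     = Mathf.Infinity;
}

// A very small fuzzy rule-based controller: crisp inputs (distance to the last
// sighting, time since the sighting) are turned into degrees of membership and
// combined with min (fuzzy AND) to decide between chasing and patrolling.
public class FuzzyChaseController
{
    // Degree to which the last sighting is "close"; the 20-unit linear falloff is an assumption.
    static float Close(float distance)   { return Mathf.Clamp01(1f - distance / 20f); }

    // Degree to which the last sighting is "fresh"; the 10-second window is an assumption.
    static float Fresh(float secondsAgo) { return Mathf.Clamp01(1f - secondsAgo / 10f); }

    public string DecideAction(Vector3 agentPosition)
    {
        float distance = Vector3.Distance(agentPosition, Blackboard.lastKnownUserPosition);

        // RULE: IF the sighting is close AND fresh THEN chase.
        float chase  = Mathf.Min(Close(distance), Fresh(Blackboard.timeSinceUserSeen));
        // RULE: otherwise keep patrolling (the complement of the chase degree).
        float patrol = 1f - chase;

        return (chase > patrol) ? "Chase" : "Patrol";
    }
}

Because every agent reads the same lastKnownUserPosition, an agent that has never seen the thief itself can still converge on the area where another agent last reported it, which is the collaborative behavior described above.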

Code Snippet: Code for reading waypoints, updating the agents' vision, and updating waypoints


// -------------------------------------------
/*
 * Read waypoints
 */
public void AddWaypoint(Vector3 pos)
{
    m_waypoints.Add(new Vector3(pos.x, 0, pos.z));
}

// -------------------------------------------
/*
 * Will draw the vision of the enemy
 */
public RaycastHit UpdateVision(Vector3 goalPosition, float viewDistance, float angleView, bool render)
{
    Vector3 originLine = new Vector3(Position.x, Position.y, Position.z);

    if (render)
    {
        // DRAW LINE 1
        m_lineRenderer.SetPosition(0, originLine);
        Vector3 destinationLine1 = new Vector3(
            Position.x + (viewDistance * Mathf.Cos((Yaw + angleView) * Mathf.Deg2Rad)),
            Position.y + 1,
            Position.z + (viewDistance * Mathf.Sin((Yaw + angleView) * Mathf.Deg2Rad)));
        m_lineRenderer.SetPosition(1, destinationLine1);

        // DRAW LINE 2
        // m_lineRenderer.SetPosition(2, originLine);
        Vector3 destinationLine2 = new Vector3(
            Position.x + (viewDistance * Mathf.Cos((Yaw - angleView) * Mathf.Deg2Rad)),
            Position.y + 1,
            Position.z + (viewDistance * Mathf.Sin((Yaw - angleView) * Mathf.Deg2Rad)));
        m_lineRenderer.SetPosition(2, destinationLine2);
        m_lineRenderer.SetPosition(3, originLine);
    }

    // LOGIC OF DETECTION OF PLAYER
    // A goal position of (-1, -1, -1) is treated as "no target to look for".
    if ((goalPosition.x != -1) && (goalPosition.y != -1) && (goalPosition.z != -1))
    {
        if (Global.IsInsideCone(this, goalPosition, (float)viewDistance, (float)angleView))
        {
            // Cast a ray from the agent toward the goal on the horizontal plane.
            Ray ray = new Ray();
            ray.origin = Position;
            Vector3 fwd = new Vector3(goalPosition.x - Position.x, 0, goalPosition.z - Position.z);
            fwd.Normalize();
            ray.direction = fwd;
            // ray.origin = Position - (1 * fwd);
            Debug.DrawRay(ray.origin, ray.direction * 100, Color.green);

            RaycastHit hitCollision = new RaycastHit();
            if (Physics.Raycast(ray, out hitCollision, Mathf.Infinity))
            {
                return (hitCollision);
            }
            else
            {
                return (new RaycastHit());
            }
        }
    }
    return (new RaycastHit());
}
// -------------------------------------------
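As a hedged illustration of how UpdateVision might be used together with the blackboard sketched earlier, the following per-frame update (placed in the same class that defines UpdateVision) reports a confirmed sighting to the shared data structure. The helper Global.GetPlayerPosition, the "Player" tag, and the viewDistance/angleView fields are assumptions made for this example.

// Illustrative sketch only; not the project's exact update loop.
void Update()
{
    Vector3 userPos = Global.GetPlayerPosition();   // assumed helper returning the user's position
    RaycastHit hit  = UpdateVision(userPos, viewDistance, angleView, true);

    // A default RaycastHit has no collider, so a non-null collider means the ray
    // toward the user actually hit something; if that something is the player,
    // the agent has an unobstructed line of sight and reports the sighting.
    if (hit.collider != null && hit.collider.CompareTag("Player"))
    {
        Blackboard.lastKnownUserPosition = userPos;
        Blackboard.timeSinceUserSeen     = 0f;
    }
}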

/*
 * Will update the waypoint to go
 */
public void UpdateWaypoints()
{
    if (m_waypoints.Count > 0)
    {
        if (m_directionWaypoints)
        {
            m_currentWaypoint++;
        }
        else
        {
            m_currentWaypoint--;
        }
        if (m_currentWaypoint < 0)
            m_currentWaypoint = m_waypoints.Count - 1;
        m_currentWaypoint = m_currentWaypoint % m_waypoints.Count;
    }
}
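For context, a minimal sketch of how this waypoint list could drive the patrolling behavior is shown below; the Patrol method, the speed parameter, and the 0.5-unit arrival threshold are illustrative assumptions, while m_waypoints, m_currentWaypoint, and UpdateWaypoints follow the snippet above.

// Illustrative sketch: move toward the current waypoint each frame and advance
// to the next one once the agent is close enough (wrapping is handled by
// UpdateWaypoints above). Assumes this lives on the agent's MonoBehaviour.
void Patrol(float speed)
{
    if (m_waypoints.Count == 0) return;

    Vector3 target = m_waypoints[m_currentWaypoint];
    transform.position = Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime);

    if (Vector3.Distance(transform.position, target) < 0.5f)
    {
        UpdateWaypoints();
    }
}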

Code for

Trials and Experimentation

- Got stuck in the ray-casting implementation.
- Had difficulties generalizing the ray-casting technique for dynamic waypoints and path finding.
- Updating states to the blackboard architecture isn't that easy.
- Always work with good memory allocation. Do not forget to destroy residual agent states, as they tend to accumulate over time and waste memory (see the sketch after this list).
- Defining the agent's periphery of vision was very challenging due to constant collisions with the walls of the maze.
- Agents tend to lose the thief over time. To enrich its decisions, an agent should constantly access the blackboard.
- Fuzzification of an agent's actions requires a well-defined fuzzy membership function; otherwise the agent tends to make atomic, all-or-nothing decisions.
- We can extend the search range of each agent by letting it solve the maze and try to figure out the last known position of the user.
- We included the ability of the user to take down an agent in order to observe the behavior of the other agents in the event of one agent dying.
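A minimal sketch of the cleanup mentioned above, assuming the Blackboard dictionary from the earlier sketch and an agentId field on the agent; the method name and structure are illustrative, not the project's exact code.

// Illustrative sketch: when an agent is taken down, drop its entry from the
// shared blackboard and destroy its GameObject so no residual state lingers.
void OnAgentKilled()
{
    Blackboard.agents.Remove(agentId);   // remove the agent's state from the blackboard
    Destroy(gameObject);                 // free the Unity object itself
}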

Conclusions and Discussion
