Artificial Intelligence developed rapidly and was deployed successfully at NASA. In May 1999, an AI program called Remote Agent autonomously operated the spacecraft Deep Space 1 for two days. The Remote Agent system was responsible for managing the spacecraft: if a fault occurred, it would detect the fault and correct it. This was based on AI.
The objective of the Remote Agent was to identify faults and then correct them. It used model-based reasoning algorithms, goal-directed planning and execution algorithms, and fault-protection methods.
Characteristics of the Remote Agent Architecture:
- It is highly programmable through a set of declarative, compositional models; this is known as model-based programming (a toy illustration follows this list).
- It performs significant amounts of onboard deduction and search at time resolutions varying from hours to hundreds of milliseconds.
- It is designed to give high-level closed-loop commanding.
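To make the idea of declarative, compositional models more concrete, here is a minimal Python sketch of model-based diagnosis over such models. The component names, modes, fault probabilities, and predicted observations are invented for illustration; they are not taken from the actual Remote Agent software.

```python
# Illustrative sketch of declarative, model-based programming.
# Component names, modes, and probabilities are hypothetical, not from
# the actual Remote Agent flight software.

from itertools import product

# Each component is declared by its possible modes, the prior
# probability of each mode, and the observation each mode predicts.
COMPONENT_MODELS = {
    "thruster_valve": {
        "open":  {"prob": 0.98, "predicts": {"thrust": "on"}},
        "stuck": {"prob": 0.02, "predicts": {"thrust": "off"}},
    },
    "fuel_line": {
        "nominal": {"prob": 0.99, "predicts": {"pressure": "high"}},
        "leaking": {"prob": 0.01, "predicts": {"pressure": "low"}},
    },
}

def diagnose(observations):
    """Return the most probable assignment of modes consistent with the
    observations -- a toy version of model-based diagnosis."""
    names = list(COMPONENT_MODELS)
    best, best_prob = None, 0.0
    for modes in product(*(COMPONENT_MODELS[n] for n in names)):
        assignment = dict(zip(names, modes))
        prob, consistent = 1.0, True
        for name, mode in assignment.items():
            spec = COMPONENT_MODELS[name][mode]
            prob *= spec["prob"]
            for var, value in spec["predicts"].items():
                if var in observations and observations[var] != value:
                    consistent = False
        if consistent and prob > best_prob:
            best, best_prob = assignment, prob
    return best

# Example: no thrust is observed even though fuel pressure looks normal.
print(diagnose({"thrust": "off", "pressure": "high"}))
# -> {'thruster_valve': 'stuck', 'fuel_line': 'nominal'}
```

Because each component is described declaratively, the same models can be composed and reused for diagnosis, recovery planning, and execution monitoring, which is the essence of the model-based programming approach described above.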
Working of the Remote Agent:
The remote agent was made up of three components:
- Smart Executive to carry out the planned activities.
- Planner and Scheduler to produce adjustable plans, specifying the fundamental activities that must take place to achieve the mission objectives.
- Mode Identification and Recovery to monitor the health of the spacecraft and attempt to correct any issues that occur.
- The Smart Executive requests a plan from the Planner and Scheduler.
- The Planner and Scheduler produces a plan for a given time period.
- The Smart Executive receives the plan from the Planner and Scheduler.
- The Smart Executive fills in the details of the plan and commands spacecraft systems to take the necessary actions.
- The state of the spacecraft is constantly monitored by Mode Identification and Recovery. It detects failures and suggests recovery actions.
- The Smart Executive executes the recovery action or requests a new plan from the Planner and Scheduler that takes the failure into account (a simplified sketch of this loop follows below).
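As a rough illustration of the plan/execute/monitor loop described above, the following Python sketch wires together toy versions of the three components. All class names, method names, activities, and telemetry values are invented; the real Remote Agent flight software was far more sophisticated.

```python
# Toy sketch of the plan / execute / monitor-and-recover loop.

class PlannerScheduler:
    def make_plan(self, goals, failure=None):
        # Produce an ordered list of (activity, duration) pairs for the
        # next period; a reported failure prepends a recovery activity.
        plan = [("warm_up_camera", 10), ("take_images", 120), ("downlink_data", 30)]
        if failure:
            plan.insert(0, (f"reconfigure_{failure}", 5))
        return plan


class ModeIdentificationRecovery:
    def check(self, telemetry):
        # Compare telemetry against the expected state and, if something
        # looks wrong, report the failure plus a suggested recovery action.
        if telemetry.get("camera_power") == "off":
            return "camera_power", "switch_to_backup_bus"
        return None, None


class SmartExecutive:
    def __init__(self, planner, mir):
        self.planner, self.mir = planner, mir

    def run(self, goals, telemetry_stream):
        queue = list(self.planner.make_plan(goals))
        while queue:
            activity, duration = queue.pop(0)
            print(f"executing {activity} for {duration}s")
            failure, recovery = self.mir.check(next(telemetry_stream))
            if failure:
                print(f"failure in {failure}; recovery action: {recovery}")
                # Ask for a fresh plan that takes the failure into account.
                queue = list(self.planner.make_plan(goals, failure=failure))


telemetry = iter([
    {"camera_power": "on"},
    {"camera_power": "off"},   # fault injected here
    {"camera_power": "on"},
    {"camera_power": "on"},
    {"camera_power": "on"},
    {"camera_power": "on"},
])
SmartExecutive(PlannerScheduler(), ModeIdentificationRecovery()).run(
    goals=["image_asteroid"], telemetry_stream=telemetry)
```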
Remote agents used by space missions will be able to explore areas that are unreachable by our communication systems, will be more reliable and less expensive, and can run many essential activities of the spacecraft. The challenge of designing a remote agent to help construct a virtual presence in space has proved to be an interesting and unique opportunity for Artificial Intelligence. The components of the Remote Agent draw upon research in a variety of areas of AI, including search, temporal reasoning, constraint propagation, planning and scheduling, reactive languages, plan execution, maintenance, deduction, and model-based diagnosis.
Autonomous Cars
An autonomous car is a vehicle capable of recognizing its environment and operating with no human involvement. These cars integrate a variety of sensors to perceive their surroundings, such as lidar, radar, GPS, sonar, and inertial measurement units.
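As a small illustration of what integrating these sensors can mean, the sketch below blends a dead-reckoned position from an inertial measurement unit with a noisy GPS fix using a simple complementary filter. The readings and the blend factor are made up; real systems typically use much richer estimators such as Kalman filters.

```python
# Minimal sketch of sensor fusion with a complementary filter: a
# dead-reckoned position from IMU velocity (trusted over short time
# scales) is blended with a noisy GPS fix (trusted over long time
# scales). All values and the blend factor are illustrative only.

def fuse_position(prev_estimate, imu_velocity, gps_position, dt, alpha=0.9):
    predicted = prev_estimate + imu_velocity * dt      # dead-reckoning step
    return alpha * predicted + (1.0 - alpha) * gps_position

position = 0.0
for imu_velocity, gps_fix in [(5.0, 0.6), (5.1, 1.1), (4.9, 1.4)]:
    position = fuse_position(position, imu_velocity, gps_fix, dt=0.1)
    print(round(position, 3))
```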
DARPA Grand Challenge in 2005
The Grand Challenge was launched by the Defense Advanced Research Projects Agency (DARPA) to stimulate innovation in unmanned ground vehicle navigation. The DARPA Grand Challenge was the first working demonstration of a long trip taken by a self-driving car. The course DARPA created was a roughly 132-mile (approximately 212-kilometer) route through the Mojave Desert in Nevada, which the driverless cars had to travel with no manual control. Four autonomous cars, including Stanley, successfully completed the desert route within the required limit of ten hours. The Stanford Racing Team from California won the competition with its car Stanley.
Stanley: The Autonomous Car that Won the DARPA Grand Challenge in 2005
The principal technological challenge in the development of Stanley was to design a highly reliable system, capable of driving at remarkably high speeds through diverse and unstructured off-road environments, and to do all this with high precision. These requirements led to various advances in the field of autonomous navigation.
Several methods were developed, and existing methods extended, in the areas of long-range terrain perception, real-time collision avoidance, and stable vehicle control on slippery and rugged terrain. Many of these developments were driven by the requirements of speed, which rendered many traditional techniques in the off-road driving field unsuitable. In pursuing these developments, the team brought to bear algorithms from various fields including distributed systems, machine learning, and probabilistic robotics.
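One of the machine-learning ideas reported for Stanley was self-supervised terrain classification: near-range laser measurements label camera pixels as drivable or not, and a color model learned from those labels is applied to the long-range part of the image. The sketch below is a toy version of that idea using a single Gaussian per class and synthetic data; the numbers and the model are illustrative only, not Stanley's actual code.

```python
# Toy sketch of self-supervised terrain classification: laser-labelled
# near-range pixels train a colour model that classifies far-range pixels.
# Colours, spreads, and the single-Gaussian model are made up.

import numpy as np

rng = np.random.default_rng(0)

# Pretend near-range pixels labelled by the laser: drivable terrain is
# brownish/grey, obstacles are greener. Each row is an RGB colour.
drivable = rng.normal([120, 110, 100], 10, size=(200, 3))
obstacle = rng.normal([60, 140, 60], 10, size=(200, 3))

def fit(samples):
    # One Gaussian (mean + covariance) per class, learned from laser labels.
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(pixels, mean, cov):
    # Unnormalised Gaussian log-likelihood (Mahalanobis distance term).
    diff = pixels - mean
    inv = np.linalg.inv(cov)
    return -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)

mu_d, cov_d = fit(drivable)
mu_o, cov_o = fit(obstacle)

# Classify "long-range" pixels the laser cannot reach.
far_pixels = rng.normal([118, 112, 98], 12, size=(5, 3))
is_drivable = (log_likelihood(far_pixels, mu_d, cov_d)
               > log_likelihood(far_pixels, mu_o, cov_o))
print(is_drivable)
```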
An important feature of Stanley’s design was retaining street-legality, so that a human driver could safely operate the robot as a conventional passenger car. Stanley’s custom user interface enabled a driver to engage and disengage the computer system at will, even while the car was in motion. As a result, the driver could disable computer control at any time during development and regain manual control of the vehicle. The pervasive use of machine learning, both before and during the race, made Stanley precise and robust.
DARPA Urban Challenge in 2007
The Urban Challenge was essentially about intelligent software. Driverless cars had to navigate city streets and roads, with other traffic present, in less than six hours. All traffic rules and regulations had to be obeyed, and the competitors were divided into two tracks. The autonomous cars were required to display intelligent capabilities such as obeying traffic rules while driving along roads, clearing intersections, merging with moving traffic, parking, and autonomously handling abnormal circumstances. The car Boss from the Tartan Racing team won the competition and claimed a $2 million prize. This was the first time driverless cars interacted with both manned and unmanned vehicle traffic in an urban area.
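As a toy example of one such capability, the sketch below handles precedence at a four-way stop: vehicles proceed in the order in which they arrived. This is an invented illustration, not Tartan Racing's actual implementation.

```python
# Toy precedence handling at a four-way stop: vehicles are served in
# arrival order, and our car may proceed only when every vehicle that
# stopped before it has cleared the intersection. Hypothetical sketch,
# not the real Urban Challenge software.

from collections import deque

class StopIntersection:
    def __init__(self):
        self.queue = deque()          # arrival order of stopped vehicles

    def vehicle_stopped(self, vehicle_id):
        if vehicle_id not in self.queue:
            self.queue.append(vehicle_id)

    def vehicle_cleared(self, vehicle_id):
        if vehicle_id in self.queue:
            self.queue.remove(vehicle_id)

    def may_proceed(self, vehicle_id):
        # Our turn only when we are at the head of the precedence queue.
        return bool(self.queue) and self.queue[0] == vehicle_id

intersection = StopIntersection()
for v in ["other_car_1", "self", "other_car_2"]:
    intersection.vehicle_stopped(v)

print(intersection.may_proceed("self"))   # False: other_car_1 has precedence
intersection.vehicle_cleared("other_car_1")
print(intersection.may_proceed("self"))   # True: we are now first in line
```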