# Introduction to AI
---
## PEAS

PEAS = Performance measure, Environment, Actuators, Sensors: the specification of the task environment.

## Properties of the task environment

• Unobservable: the agent has no sensors at all.
• Multiagent environment: Competitive vs Cooperative
• Deterministic: the next state is completely determined by the current state and the agent's action.
• Sequential: A current decision could affect future decisions.
• Static: the environment does not change while the agent is deliberating.
• Dynamic: the environment can change while the agent is deliberating; it is continuously asking the agent what it wants to do.
• Semi-dynamic: the environment itself does not change with time, but the agent's performance score does.
• Discrete: a finite number of distinct states, percepts, and actions (e.g., chess).
• Continuous: states, time, or actions range over continuous values (e.g., playing tennis).
• Known environment: the outcomes (or outcome probabilities if the
environment is stochastic) for all actions are given.
• Unknown environment: the agent needs to learn how it works to
make good decisions.

## Types of agent programs
### Simple reflex agents
• Select actions based on the current percept, ignoring the
rest of the percept history
• Implemented as condition-action (if-else) rules; see the sketch after this list.
• Works correctly only if the environment is fully observable.
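
A minimal sketch of such condition-action rules; the two-location vacuum world, percept format, and rules below are assumptions for illustration, not from the notes:

```python
# A minimal simple-reflex agent for a hypothetical two-location vacuum world.
# The percept is a (location, status) pair; no percept history is kept.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action ("if-else") rules that look only at the current percept.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# Each call sees only the current percept.
print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left
```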
### Model-based reflex agents
• Depends on the percept history; the internal state reflects at least
some of the unobserved aspects of the world.
• The model encodes:
1. How the world evolves independently of the agent
2. How the agent’s actions affect the world
• The agent must keep track of an internal state in partially observable environments.
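
A skeleton of that bookkeeping, as a sketch: `transition_model`, `sensor_model`, and `rules` are hypothetical callables supplied by the caller.

```python
# Model-based reflex agent skeleton: maintains an internal state in a
# partially observable environment.
class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules):
        self.state = {}                 # best guess about the unobserved world
        self.last_action = None
        self.transition_model = transition_model  # how the world evolves / how actions change it
        self.sensor_model = sensor_model           # how percepts reflect the world
        self.rules = rules                         # list of (condition, action) pairs

    def __call__(self, percept):
        # 1) Predict how the world evolved and what our last action did to it.
        self.state = self.transition_model(self.state, self.last_action)
        # 2) Fold in whatever the new percept reveals.
        self.state = self.sensor_model(self.state, percept)
        # 3) Fire the first matching condition-action rule.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = None
        return None
```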
### Goal-based agents
• Knowing the current state of the environment is not always enough to decide what to do.
• The agent further needs some sort of **goal information that describes desired situations.**
• Less efficient but more flexible.
### Utility-based agent
• Goals are inadequate to generate high-quality behavior in
most environments.
• An agent’s utility function is essentially an internalization of the performance measure.
• Goal → binary success/failure; utility → degree of success (how successful the outcome is).
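
A tiny sketch of "degree of success": pick the action with the highest expected utility. The actions, outcome probabilities, and utility values below are made up for illustration.

```python
# Choose the action that maximizes expected utility.
# outcomes[action] is a list of (probability, resulting_state) pairs.
def best_action(outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes[action])
    return max(outcomes, key=expected_utility)

# Hypothetical taxi example: utility grades outcomes, not just goal / no goal.
outcomes = {
    "fast_route": [(0.7, "on_time"), (0.3, "very_late")],
    "safe_route": [(1.0, "slightly_late")],
}
utility = {"on_time": 10, "very_late": 0, "slightly_late": 6}.get
print(best_action(outcomes, utility))   # -> fast_route (EU 7.0 vs 6.0)
```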
## Formulating problems by abstraction
### The 8-puzzle
• States: the location of each of the eight tiles and the blank
• Initial state: any state can be designated as the initial state
• Actions: movements of the blank space
- Left, Right, Up, or Down. Different subsets of these are possible
depending on where the blank is
• Transition model: return a resulting state given a state and an
action
• Goal test: check whether the state matches the goal configuration
• Path cost: each step costs 1
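
A compact sketch of this formulation, assuming a state is a 9-tuple read row by row with 0 standing for the blank:

```python
# 8-puzzle formulation sketch: a state is a tuple of 9 tiles, 0 is the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}   # blank-index offsets

def actions(state):
    """Legal blank moves; the subset depends on where the blank is."""
    i = state.index(0)
    acts = []
    if i % 3 > 0:  acts.append("Left")
    if i % 3 < 2:  acts.append("Right")
    if i // 3 > 0: acts.append("Up")
    if i // 3 < 2: acts.append("Down")
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL    # goal configuration; each step costs 1
```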
## Algorithms
UCS (uniform-cost search): equivalent to Dijkstra's algorithm; always expands the frontier node with the lowest path cost g(n).
IDS (iterative deepening search): repeated depth-limited DFS with an increasing depth limit.
GBFS (greedy best-first search): expands the node with the smallest heuristic value h(n).
A* (graph search): keeps an explored set, so no state is expanded more than once (see the sketch after this list).
A* (tree search): may expand repeated states.
Simulated annealing: allows many random (even bad) moves at a high initial temperature, then gradually lowers the temperature as the search closes in on an optimum.
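
A minimal A* graph-search sketch, reusing `GOAL`, `actions`, `result`, and `goal_test` from the 8-puzzle sketch above with an assumed misplaced-tiles heuristic; with `h = 0` the same loop behaves like UCS, and ordering by `h` alone would give GBFS.

```python
import heapq

def h_misplaced(state):
    """Admissible heuristic: number of misplaced tiles (blank excluded)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def astar(start, actions, result, goal_test, h):
    """A* graph search: an explored set keeps repeated states from being expanded twice."""
    frontier = [(h(start), 0, start, [])]        # (f = g + h, g, state, path)
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for a in actions(state):
            s2 = result(state, a)
            if s2 not in explored:
                g2 = g + 1                       # each step costs 1
                heapq.heappush(frontier, (g2 + h(s2), g2, s2, path + [a]))
    return None

# Hypothetical start state one move away from the goal:
print(astar((1, 0, 2, 3, 4, 5, 6, 7, 8), actions, result, goal_test, h_misplaced))  # -> ['Left']
```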
## CSP

• Variables: X = {WA, NT,Q, NSW, V, SA, T}
• Domains: Di = {red, green, blue}
• Constraints: Adjacent regions must have different colors
C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}
n variables, domain size d → O(d^n) possible complete assignments (the size of the search space)
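
The same CSP written out as plain data, a sketch reused by the AC-3 and backtracking sketches further below:

```python
# Australia map-coloring CSP as plain data.
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
neighbors = {            # adjacency map encoding the "different color" constraints
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "WA": ["SA", "NT"],
    "NT": ["SA", "WA", "Q"],
    "Q":  ["SA", "NT", "NSW"],
    "NSW": ["SA", "Q", "V"],
    "V":  ["SA", "NSW"],
    "T":  [],
}

def satisfies(x, vx, y, vy):
    """Binary constraint: adjacent regions must take different colors."""
    return y not in neighbors[x] or vx != vy
```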
### Constraint
**Unary constraint**: restrict the value of a single variable
- E.g., the South Australians do not like green → ⟨(SA), SA ≠ green⟩
**Binary constraint**: relate two variables
- E.g., adjacent regions must have different colors → ⟨(SA, WA), SA ≠ WA⟩
**Higher-order constraints**: involve three or more variables
**Global constraints**: involving an arbitrary number of variables
- Alldiff = all variables involved must have different values
**Soft constraints**: which solutions are preferred
- E.g., red is better than green → this can be represented by a cost for each variable assignment
**Constraint optimization problem (COP)**: a combination of
optimization with CSPs (e.g., linear programming when the constraints and objective are linear)
**Node consistency**: if all the values in the variable’s domain satisfy the variable’s unary constraints.
**Arc consistency**: if every value in its domain satisfies the variable's **binary constraints** (i.e., for every value of Xi there is some allowed value in the domain of each neighbouring Xj).
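
A sketch of the AC-3 propagation behind this definition, reusing `domains`, `neighbors`, and `satisfies` from the map-coloring data above:

```python
from collections import deque

def revise(domains, xi, xj):
    """Drop values of xi that have no supporting value left in xj's domain."""
    removed = False
    for vx in set(domains[xi]):
        if not any(satisfies(xi, vx, xj, vy) for vy in domains[xj]):
            domains[xi].discard(vx)
            removed = True
    return removed

def ac3(domains, neighbors):
    """Make every arc consistent; returns False if some domain becomes empty."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj):
            if not domains[xi]:
                return False              # inconsistency detected
            for xk in neighbors[xi]:      # re-examine arcs pointing back at xi
                if xk != xj:
                    queue.append((xk, xi))
    return True
```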
**Minimum-remaining-values** (MRV) heuristic: choose the
variable with the fewest legal values.

**Degree heuristic (DH)**: choose the variable that is involved in the largest number of constraints on other unassigned variables.

**Least constraining value** (LCV) heuristic: given a variable, choose the value that leaves the maximum flexibility for subsequent variable assignments.

Forward checking only prunes the domains of unassigned variables that directly connect to the variable just assigned.
**Arc consistency is stronger than forward checking.**
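
A backtracking sketch combining MRV with forward checking on the same map-coloring data (again reusing `variables`, `domains`, and `neighbors`):

```python
import copy

def backtrack(assignment, domains):
    if len(assignment) == len(variables):
        return assignment
    # MRV: pick the unassigned variable with the fewest remaining legal values.
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in sorted(domains[var]):
        pruned = copy.deepcopy(domains)
        pruned[var] = {value}
        # Forward checking: prune only the still-unassigned variables that
        # are directly connected to the variable just assigned.
        consistent = True
        for n in neighbors[var]:
            if n not in assignment:
                pruned[n].discard(value)
                if not pruned[n]:
                    consistent = False    # a neighbour ran out of values
                    break
        if consistent:
            solution = backtrack({**assignment, var: value}, pruned)
            if solution is not None:
                return solution
    return None                           # every value failed -> backtrack

print(backtrack({}, domains))             # one valid coloring of the map
```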


## Operators

### First Order Logic
Connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), ⇔ (iff); quantifiers: ∀ (for all), ∃ (there exists).






## Model
### Decision tree

Example:
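
The worked example is missing here; as a placeholder, a small sketch of the entropy / information-gain computation that drives attribute selection when growing a decision tree (the toy dataset is made up):

```python
import math
from collections import Counter

def entropy(labels):
    """H(S) = -sum p_i * log2(p_i) over the class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v)."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Toy data (hypothetical): pick the attribute with the largest gain as the root.
rows = [{"outlook": "sunny", "windy": False}, {"outlook": "rain", "windy": True},
        {"outlook": "sunny", "windy": True}, {"outlook": "rain", "windy": False}]
labels = ["no", "yes", "no", "yes"]
print(information_gain(rows, labels, "outlook"))  # 1.0: outlook splits perfectly
print(information_gain(rows, labels, "windy"))    # 0.0: windy tells us nothing
```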



