Artificial Intelligence Exercise Answers (Partially Translated)

Chapter 1

1.1 Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent.

Intelligence: Dictionary definitions of intelligence talk about "the capacity to acquire and apply knowledge," "the faculty of thought and reason," or "the ability to comprehend and profit from experience." These are all reasonable answers, but if we want something quantifiable we would use something like "the ability to apply knowledge in order to perform better in an environment."

Artificial intelligence: We define artificial intelligence as the study and construction of agent programs that perform well in a given environment, for a given agent architecture.

Agent: We define an agent as an entity that takes action in response to percepts from an environment.

1.4 There are well-known classes of problems that are intractably difficult for computers, and other classes that are provably undecidable. Does this mean that AI is impossible?

No. It means that AI systems should avoid trying to solve intractable problems. Usually, this means they can only approximate optimal behavior. Notice that humans don't solve NP-complete problems either. Sometimes they are good at solving specific instances with a lot of structure, perhaps with the aid of background knowledge. AI systems should attempt to do the same.

1.11 "Surely computers cannot be intelligent: they can do only what their programmers tell them." Is the latter statement true, and does it imply the former?

This depends on your definition of "intelligent" and "tell." In one sense computers only do what the programmers command them to do, but in another sense what the programmers consciously tell the computer to do often has very little to do with what the computer actually does.
Anyone who has written a program with an ornery bug knows this, as does anyone who has written a successful machine learning program. So in one sense Samuel "told" the computer "learn to play checkers better than I do, and then play that way," but in another sense he told the computer "follow this learning algorithm" and it learned to play. So we are left in a situation where you may or may not consider learning to play checkers to be a sign of intelligence (or you may think that learning to play in the right way requires intelligence, but not in this way), and you may think the intelligence resides in the programmer or in the computer.

Chapter 2

2.1 Define in your own words the following terms: agent, agent function, agent program, rationality, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

The following are just some of the many possible definitions that can be written:

Agent: an entity that perceives and acts; or, one that can be viewed as perceiving and acting. Essentially any object qualifies; the key point is the way the object implements an agent function. (Note: some authors restrict the term to programs that operate on behalf of a human, or to programs that can cause some or all of their code to run on other machines on a network, as in mobile agents.)

Agent function: a function that specifies the agent's action in response to every possible percept sequence.

Agent program: that program which, combined with a machine architecture, implements an agent function. In our simple designs, the program takes a new percept on each invocation and returns an action.

Rationality: a property of agents that choose actions that maximize their expected utility, given the percepts to date.
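The agent-function and agent-program definitions above can be sketched in code: the agent function as an explicit table from percept sequences to actions, and an agent program that implements it by recording each new percept on each invocation. The two-cell vacuum-world percepts and the tiny table are illustrative assumptions, not from this document.

```python
# Sketch: an agent function as a table from percept sequences to
# actions, and an agent program that implements it. Percepts are
# (location, status) pairs from a hypothetical two-cell vacuum world.

TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def make_agent_program(table):
    percepts = []                        # the percept sequence so far
    def program(percept):
        """One invocation: record the new percept, return an action."""
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

program = make_agent_program(TABLE)
# program(("A", "Clean")) -> "Right"
# program(("B", "Dirty")) -> "Suck"
```

A real table would be astronomically large, which is exactly why the other agent designs in this chapter exist; the table serves only to make the function/program distinction concrete.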
Autonomy: a property of agents whose behavior is determined by their own experience rather than solely by their initial programming.

Reflex agent: an agent whose action depends only on the current percept.

Model-based agent: an agent whose action is derived directly from an internal model of the current world state that is updated over time.

Goal-based agent: an agent that selects actions that it believes will achieve explicitly represented goals.

Utility-based agent: an agent that selects actions that it believes will maximize the expected utility of the outcome state.

Learning agent: an agent whose behavior improves over time based on its experience.

2.2 Both the performance measure and the utility function measure how well an agent is doing. Explain the difference between the two.

A performance measure is used by an outside observer to evaluate how successful an agent is. It is a function from histories to a real number. A utility function is used by an agent itself to evaluate how desirable states or histories are. In our framework, the utility function may not be the same as the performance measure; furthermore, an agent may have no explicit utility function at all, whereas there is always a performance measure.

2.5 For each of the following agents, develop a PEAS description of the task environment:
a. Robot soccer player;
b. Internet book-shopping agent;
c. Autonomous Mars rover;
d.
Mathematician's theorem-proving assistant.

Some representative, but not exhaustive, answers are given in Figure S2.1, reconstructed here:

Robot soccer player
  Performance measure: winning the game, defeating the opponent
  Environment: field, referee, own team, other team, own body
  Actuators: devices (legs) for locomotion and kicking
  Sensors: camera, touch sensors, accelerometers

Internet book-shopping agent
  Performance measure: obtaining requested/interesting books, minimizing expenditure
  Environment: the Internet
  Actuators: follow links, fill in form fields, display to user
  Sensors: web pages, user requests

Autonomous Mars rover
  Performance measure: terrain explored and reported, samples gathered and analyzed
  Environment: Mars, launch vehicle, lander
  Actuators: wheels/legs, sample-collection device, analysis devices, radio transmitter
  Sensors: camera, touch sensors, orientation sensors

2.6 For each of the agent types listed in Exercise 2.5, characterize the environment according to the properties given in Section 2.3, and select a suitable agent design. (The following exercises all concern the implementation of environments and agents for the vacuum-cleaner world.)

Environment properties are given in Figure S2.2, reconstructed here:

Task environment                           Observable  Deterministic  Episodic    Static   Discrete    Agents
Robot soccer player                        partially   stochastic     sequential  dynamic  continuous  multi
Internet book-shopping agent               partially   deterministic  sequential  static   discrete    single
Autonomous Mars rover                      partially   stochastic     sequential  dynamic  continuous  single
Mathematician's theorem-proving assistant  fully       deterministic  sequential  static   discrete    multi

Suitable agent types:

a. A model-based reflex agent would suffice for most aspects; for tactical play, a utility-based agent with lookahead would be useful.

b. A goal-based agent would be appropriate for specific book requests. For more open-ended tasks, e.g., "Find me something interesting to read," tradeoffs are involved and the agent must compare utilities for various possible purchases.

c. A model-based reflex agent would suffice for low-level navigation and obstacle avoidance; for route planning, exploration planning, experimentation, etc., some combination of goal-based and utility-based agents would be needed.

d. For specific proof tasks, a goal-based agent is needed. For "exploratory" tasks, e.g., "Prove some useful lemmata concerning operations on strings," a utility-based architecture might be needed.

Chapter 3

3.1 Define in your own words the following terms: state, state space, search tree, search node, goal, action, successor function, and branching factor.
State: a situation that an agent can find itself in. We distinguish two types of states: world states (the actual concrete situations in the real world) and representational states (the abstract descriptions of the real world that are used by the agent in deliberating about what to do).

State space: a graph whose nodes are the set of all states, and whose links are actions that transform one state into another.

Search tree: a tree (a graph with no undirected loops) in which the root node is the start state and the set of children for each node consists of the states reachable by taking any action.

Search node: a node in the search tree.

Goal: a state that the agent is trying to reach.

Action: something that the agent can choose to do.

Successor function: a function that describes the agent's options: given a state, it returns a set of (action, state) pairs, where each state is the state reachable by taking the action.

Branching factor: the number of actions available to the agent at a node of the search tree.

3.7 Give the initial state, goal test, successor function, and cost function for each of the following. Choose a formulation that is precise enough to be implemented.

a. You have to color a planar map using only four colors, in such a way that no two adjacent regions have the same color.
Initial state: no regions colored.
Goal test: all regions colored, and no two adjacent regions have the same color.
Successor function: assign a color to a region.
Cost function: number of assignments.

b. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas.
The room contains two stackable, movable, climbable 3-foot-high crates.
Initial state: as described in the text.
Goal test: monkey has bananas.
Successor function: hop on crate; hop off crate; push crate from one spot to another; walk from one spot to another; grab bananas (if standing on crate).
Cost function: number of actions.

c. You have a program that outputs the message "illegal input record" when fed a certain file of input records. You know that processing of each record is independent of the other records. You want to discover what record is illegal.
Initial state: considering all input records.
Goal test: considering a single record, and it gives the "illegal input" message.
Successor function: run again on the first half of the records; run again on the second half of the records.
Cost function: number of runs.
Note: this is a contingency problem; you need to see whether a run gives an error message or not to decide what to do next.

d. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the jugs up or empty them out from one to another or onto the ground. You need to measure out exactly one gallon.
Initial state: jugs have values (0, 0, 0).
Goal test: some jug contains exactly 1 gallon.
Successor function: given values (x, y, z), generate (12, y, z), (x, 8, z), (x, y, 3) (by filling); (0, y, z), (x, 0, z), (x, y, 0) (by emptying); or, for any two jugs with current values x and y, pour y into x: this changes the jug with x to the minimum of x + y and the capacity of the jug, and decrements the jug with y by the amount gained by the first jug.
Cost function: number of actions.

3.8 Consider a state space where the start state is number 1 and the successor function for state n returns two states, numbers 2n and 2n + 1.

a. Draw the portion of the state space for states 1 to 15.
See Figure S3.1.

b.
Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
Breadth-first: 1 2 3 4 5 6 7 8 9 10 11
Depth-limited: 1 2 4 8 9 5 10 11
Iterative deepening: 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11

c. Would bidirectional search be appropriate for this problem? If so, describe in detail how it would work.
Bidirectional search is very useful, because the only successor of n in the reverse direction is floor(n/2). This helps focus the search.

d. What is the branching factor in each direction of the bidirectional search?
2 in the forward direction; 1 in the reverse direction.

e. Does the answer to (c) suggest a reformulation of the problem that would allow you to solve the problem of getting from state 1 to a given goal state with almost no search?
Yes; start at the goal, and apply the single reverse successor action until you reach 1.

Chapter 4

4.2 The heuristic path algorithm is a best-first search in which the objective function is f(n) = (2 - w)g(n) + w·h(n). For what values of w is this algorithm guaranteed to be optimal? (You may assume that h is admissible.) What kind of search does this perform when w = 0? When w = 1? When w = 2?

w = 0 gives f(n) = 2g(n). This behaves exactly like uniform-cost search: the factor of two makes no difference in the ordering of the nodes. w = 1 gives A* search. w = 2 gives f(n) = 2h(n), i.e., greedy best-first search. We can also write f(n) = (2 - w)[g(n) + (w / (2 - w))·h(n)], which behaves exactly like A* search with the heuristic (w / (2 - w))·h(n). For w <= 1, this heuristic is always less than or equal to h(n) and hence admissible, provided h(n) is itself admissible; so the algorithm is guaranteed to be optimal for 0 <= w <= 1.

4.11 Give the name of the algorithm that results from each of the following special cases:

a. Local beam search with k = 1.
Local beam search with k = 1 is hill-climbing search.

b. Local beam search with one initial state and no limit on the number of states retained.
Local beam search with unbounded k: strictly speaking, this doesn't make sense.
(Exercise may be modified in future printings.) The idea is that if every successor is retained (because k is unbounded), then the search resembles breadth-first search in that it adds one complete layer of nodes before adding the next layer. Starting from one state, the algorithm would be essentially identical to breadth-first search except that each layer is generated all at once.

c. Simulated annealing with T = 0 at all times (and omitting the termination test).
Ignoring the fact that the termination step would be triggered immediately, the search would be identical to first-choice hill climbing, because every downward successor would be rejected with probability 1. (Exercise may be modified in future printings.)

d. Genetic algorithm with population size N = 1.
If the population size is 1, then the two selected parents will be the same individual; crossover yields an exact copy of the individual; then there is a small chance of mutation. Thus, the algorithm executes a random walk in the space of individuals.

Chapter 5

5.1 Define in your own words the terms constraint satisfaction problem, constraint, backtracking search, arc consistency, backjumping, and min-conflicts.

Constraint satisfaction problem: a problem in which the goal is to choose a value for each of a set of variables, in such a way that the values all obey a set of constraints.

Constraint: a restriction on the possible values of two or more variables. For example, a constraint might say that A = a is not allowed in conjunction with B = b.

Backtracking search: a form of depth-first search in which there is a single representation of the state that gets updated for each successor, and then must be restored when a dead end is reached.
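The backtracking-search definition above can be made concrete with a minimal sketch on a map-coloring CSP, using the familiar Australia map: a single partial assignment is extended variable by variable and restored when a dead end is reached. The variable order and color names are illustrative choices.

```python
# Minimal backtracking search for map coloring: one assignment dict is
# extended in place and undone on backtracking. Enumerates every valid
# coloring of the Australia map.

NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def backtrack(assignment, variables, colors, solutions):
    if len(assignment) == len(variables):
        solutions.append(dict(assignment))
        return
    var = variables[len(assignment)]
    for color in colors:
        # Consistent with all already-assigned neighbors?
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color          # extend the assignment
            backtrack(assignment, variables, colors, solutions)
            del assignment[var]              # restore on backtracking

solutions = []
backtrack({}, list(NEIGHBORS), ["r", "g", "b"], solutions)
# len(solutions) -> 18 valid 3-colorings
```

Running it with three colors reproduces the count derived in 5.2: six mainland colorings times three choices for Tasmania.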
Arc consistency: a directed arc from variable A to variable B in a CSP is arc consistent if, for every value in the current domain of A, there is some consistent value of B.

Backjumping: a way of making backtracking search more efficient, by jumping back more than one level when a dead end is reached.

Min-conflicts: a heuristic for use with local search on CSP problems. The heuristic says that, when given a variable to modify, choose the value that conflicts with the fewest number of other variables.

5.2 How many solutions are there for the map-coloring problem in Figure 5.1?

There are 18 solutions for coloring Australia with three colors. Start with SA, which can have any of three colors. Then, moving clockwise, WA can have either of the other two colors, and everything else is strictly determined; that makes 6 possibilities for the mainland, times 3 for Tasmania, yielding 18.

5.6 Solve the cryptarithmetic problem in Figure 5.2 by hand, using backtracking, forward checking, and the MRV and least-constraining-value heuristics.

a. Choose the X3 variable. Its domain is {0, 1}.
b. Choose the value 1 for X3. (We can't choose 0; it wouldn't survive forward checking, because it would force F to be 0, and the leading digit of the sum must be non-zero.)
c. Choose F, because it has only one remaining value.
d. Choose the value 1 for F.
e. Now X2 and X1 are tied for minimum remaining values at 2; let's choose X2.
f. Either value survives forward checking; let's choose 0 for X2.
g. Now X1 has the minimum remaining values.
h. Again, arbitrarily choose 0 for the value of X1.
i. The variable O must be an even number (because it is the sum of T + T, with X2 = 0) and less than 5 (because O + O = R + 10·X1 = R). That makes it most constrained.
j. Arbitrarily choose 4 as the value of O.
k. R now has only 1 remaining value.
l. Choose the value 8 for R.
m. T now has only 1 remaining value.
n. Choose the value 7 for T.
o. U must be an even number less than 9; choose U.
p.
The only value for U that survives forward checking is 6.
q. The only variable left is W.
r. The only value left for W is 3.
s. This is a solution.

This is a rather easy (under-constrained) puzzle, so it is not surprising that we arrive at a solution with no backtracking (given that we are allowed to use forward checking).

Chapter 6

6.1 This problem exercises the basic concepts of game playing, using tic-tac-toe (noughts and crosses) as an example. We define Xn as the number of rows, columns, or diagonals with exactly n X's and no O's. Similarly, On is the number of rows, columns, or diagonals with exactly n O's. The utility function assigns +1 to any position with X3 = 1 and -1 to any position with O3 = 1. All other terminal positions have utility 0. For nonterminal positions, we use a linear evaluation function defined as Eval(s) = 3X2(s) + X1(s) - (3O2(s) + O1(s)).

a. Approximately how many possible games of tic-tac-toe are there?
b. Show the whole game tree starting from an empty board down to depth 2 (i.e., one X and one O on the board), taking symmetry into account.
c. Mark on your tree the evaluations of all the positions at depth 2.
d. Using the minimax algorithm, mark on your tree the backed-up values for the positions at depths 1 and 0, and use those values to choose the best starting move.
e. Circle the nodes at depth 2 that would not be evaluated if alpha-beta pruning were applied, assuming the nodes are generated in the optimal order for alpha-beta pruning.

Figure S6.1 shows the game tree, with the evaluation function values below the terminal nodes and the backed-up values to the right of the non-terminal nodes. The values imply that the best starting move for X is to take the center. The terminal nodes with a bold outline are the ones that do not need to be evaluated, assuming the optimal ordering.

6.15 Describe how the minimax and alpha-beta algorithms change for two-player, non-zero-sum games in which each player has his or her own utility function.
You may assume that each player knows the other's utility function. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha-beta?

The minimax algorithm for non-zero-sum games works
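The answer breaks off at this point in the source. As a hedged sketch of the standard idea it begins to state (an assumption here, not taken from this document): in a two-player non-zero-sum game, each node backs up a pair of utilities (u1, u2) instead of a single value, and the player to move selects the child that maximizes his or her own component.

```python
# Sketch: vector-valued minimax backup for a two-player non-zero-sum
# game. A leaf is a (u1, u2) utility pair; an internal node is a list
# of child nodes. `player` is 0 or 1, the index of the mover's utility.

def backup(node, player):
    if isinstance(node, tuple):          # leaf: return its utility pair
        return node
    children = [backup(child, 1 - player) for child in node]
    return max(children, key=lambda u: u[player])

# Example: player 0 moves at the root of a made-up depth-2 tree.
tree = [[(1, 4), (2, 3)], [(0, 1), (5, 0)]]
# backup(tree, 0) -> (1, 4)
```

Note that player 0 ends up with 1, not the 5 available at leaf (5, 0): player 1 would never choose that leaf. Because the two utilities need not be opposed, a bound on one player's value says nothing about the other's, which is why alpha-beta-style pruning is not possible without constraints on the terminal utilities.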
