Methodological recommendations for students' independent work

whereas тогда как; в то время как

whereby тем самым; посредством чего

wherein в чем

wherever где бы ни; куда бы ни

whether ли

whether ... or или ... или

while в то время как; пока

with a view to с целью; с намерением

with every good wish с лучшими пожеланиями

within внутри; в пределах

within a factor of ten в пределах одного порядка

within the limits of the power в пределах прав

without без; (так чтобы) не

without question бесспорно

without reservation безоговорочно

with reference to ссылаясь на, относительно; что касается

with regard to в отношении, относительно; с учетом

with respect to по отношению к, относительно

with the exception of за исключением

worthwhile заслуживающий внимания

yet однако, до сих пор, еще


PRACTICAL TASKS

Task I. Read the text “Laser lidar” and study the summary of this text.

Laser lidar

Laser-based lidar (light detection and ranging) has also proven to be an important tool for oceanographers. While satellite pictures of the ocean surface provide insight into overall ocean health and hyperspectral imaging provides more insight, lidar is able to penetrate beneath the surface and obtain more specific data, even in murky coastal waters. In addition, lidar is not limited to cloudless skies or daylight hours.

“One of the difficulties of passive satellite-based systems is that there is water-surface reflectance, water-column influence, water chemistry, and also the influence of the bottom”, said Chuck Bostater, director of the remote sensing lab at Florida Tech University (Melbourne, FL). “In shallow waters we want to know the quality of the water and remotely sense the water column without having the signal contaminated by the water column or the bottom”.

A typical lidar system comprises a laser transmitter, a receiver telescope, photodetectors, and range-resolving detection electronics. In coastal lidar studies, a 532-nm laser is typically used because it is not absorbed as strongly by the constituents in the water and so penetrates deeper in turbid or dirty water (light at 400 to 490 nm penetrates deepest in clear ocean water). The laser transmits a short pulse of light in a specific direction. The light interacts with molecules in the air, and the molecules send a small fraction of the light back to the telescope, where it is measured by the photodetectors.
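The text gives no formulas for this conversion; purely as an illustration of what the range-resolving electronics compute, here is a minimal Python sketch. The function name, constants and the water-path handling are assumptions made for the example, not details from the article.

# Illustrative sketch only, not from the article: converting a lidar pulse's
# round-trip time into a one-way range. All names and constants are assumed.

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of seawater (assumed)

def range_from_round_trip(t_seconds, in_water=False):
    """Return the one-way distance to the scatterer, in metres.

    The pulse travels out and back, so the one-way range is half the
    round-trip path; in water, light is slower by the factor N_WATER.
    """
    v = C_VACUUM / (N_WATER if in_water else 1.0)
    return v * t_seconds / 2.0

# Example: a return detected 100 ns after emission from within the water column
print(round(range_from_round_trip(100e-9, in_water=True), 2))   # -> 11.27 (metres)

A real bathymetric lidar would also have to split the flight time between the air and water segments of the path and correct for refraction at the surface; the sketch ignores both.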

Abstract (Summary)

Laser lidar. “Laser Focus World”, 2003, v. 46, No. 3, p. 45.

The text focuses on the use of laser-based lidar in oceanography.

The ability of lidar to penetrate beneath the ocean surface and obtain specific data even in murky coastal waters is specially mentioned.

Particular attention is given to the advantage of laser-based lidars over passive satellite-based systems in obtaining signals that are not contaminated by the water column or the bottom.

A typical lidar system is described with emphasis on the way it works.

This information may be of interest to research teams engaged in studying shallow waters.

Task II. Read the texts and write summaries following the model given above.

Text 1

Artificial Intelligence at Edinburgh University: a Perspective

Jim Howe

Revised June 2007.

Artificial Intelligence (AI) is an experimental science whose goal is to understand the nature of intelligent thought and action. This goal is shared with a number of longer-established subjects such as Philosophy, Psychology and Neuroscience. The essential difference is that AI scientists are committed to computational modelling as a methodology for explicating the interpretative processes which underlie intelligent behaviour and relate sensing of the environment to action in it. Early workers in the field saw the digital computer as the best device available to support the many cycles of hypothesizing, modelling, simulating and testing involved in research into these interpretative processes. They set about the task of developing a programming technology that would enable the use of digital computers as an experimental tool. Over the first four decades of AI's life, a considerable amount of time and effort was given over to the design and development of new special-purpose list-processing languages, tools and techniques. While the symbolic programming approach dominated at the outset, other approaches such as non-symbolic neural nets and genetic algorithms have featured strongly, reflecting the fact that computing is merely a means to an end, an experimental tool, albeit a vital one.

The popular view of intelligence is that it is associated with high-level problem solving, i.e. people who can play chess, solve mathematical problems, make complex financial decisions, and so on, are regarded as intelligent. What we know now is that intelligence is like an iceberg. A small amount of processing activity relates to high-level problem solving, that is, the part that we can reason about and introspect on, but much of it is devoted to our interaction with the physical environment. Here we are dealing with information from a range of senses, visual, auditory and tactile, and coupling sensing to action, including the use of language, in an appropriate reactive fashion which is not accessible to reasoning and introspection. Using the terms symbolic and sub-symbolic to distinguish these different processing regimes, in the early decades of our work in Edinburgh we subscribed heavily to the view that to make progress towards our goal we would need to understand the nature of the processing at both levels and the relationships between them. For example, some of our work focused primarily on symbolic-level tasks, in particular, our work on automated reasoning, expert systems and planning and scheduling systems, some aspects of our work on natural language processing, and some aspects of machine vision, such as object recognition, whereas other work dealt primarily with tasks at the sub-symbolic level, including automated assembly of objects from parts, mobile robots, and machine vision for navigation.

Much of AI's accumulating know-how resulted from work at the symbolic level, modelling mechanisms for performing complex cognitive tasks in restricted domains, for example, diagnosing faults, extracting meaning from utterances, recognising objects in cluttered scenes. But this know-how had value beyond its contribution to the achievement of AI's scientific goal. It could be packaged and made available for use in the work place. This became apparent in the late 1970s and led to an upsurge of interest in applied AI. In the UK, the term Knowledge Based Systems (KBS) was coined for work which integrated AI know-how, methods and techniques with know-how, methods and techniques from other disciplines such as Computer Science and Engineering. This led to the construction of practical applications that replicated expert level decision making or human problem solving, making it more readily available to technical and professional staff in organisations. Today, AI/KBS technology has migrated into a plethora of products of industry and commerce, mostly unbeknown to the users.

History of AI at Edinburgh

The Department of Artificial Intelligence can trace its origins to a small research group established in a flat at 4 Hope Park Square in 1963 by Donald Michie, then Reader in Surgical Science. During the Second World War, through his membership of Max Newman's code-breaking group at Bletchley Park, Michie had been introduced to computing and had come to believe in the possibility of building machines that could think and learn. By the early 1960s, the time appeared to be ripe to embark on this endeavour.

Looking back, there are four discernible periods in the development of AI at Edinburgh, each of roughly ten years' duration. The first covers the period from 1963 to the publication of the Lighthill Report by the Science Research Council in 1973. During this period, Artificial Intelligence was recognised by the University, first by establishing the Experimental Programming Unit in January 1965 with Michie as Director, and then by the creation of the Department of Machine Intelligence and Perception in October 1966. By then Michie had persuaded Richard Gregory and Christopher Longuet-Higgins, then at Cambridge University and planning to set up a brain research institute, to join forces with him at Edinburgh. Michie's prime interest lay in the elucidation of design principles for the construction of intelligent robots, whereas Gregory and Longuet-Higgins recognised that computational modelling of cognitive processes by machine might offer new theoretical insights into their nature. Indeed, Longuet-Higgins named his research group the Theoretical Section and Gregory called his the Bionics Research Laboratory. During this period there were remarkable achievements in a number of sub-areas of the discipline, including the development of new computational tools and techniques and their application to problems in such areas as assembly robotics and natural language. The POP-2 symbolic programming language which supported much subsequent UK research and teaching in AI was designed and developed by Robin Popplestone and Rod Burstall. It ran on a multi-access interactive computing system, only the second of its kind to be opened in the UK. By 1973, the research in robotics had produced the FREDDY II robot which was capable of assembling objects automatically from a heap of parts.

Unfortunately, from the outset of their collaboration these scientific achievements were marred by significant intellectual disagreements about the nature and aims of research in AI and growing disharmony between the founding members of the Department. When Gregory resigned in 1970 to go to Bristol University, the University's reaction was to transform the Department into the School of Artificial Intelligence, which was to be run by a Steering Committee. Its three research groups (Jim Howe had taken over responsibility for leading Gregory's group when he left) were given departmental status; the Bionics Research Laboratory's name was retained, whereas the Experimental Programming Unit became the Department of Machine Intelligence, and (much to the disgust of some local psychologists) the Theoretical Section was renamed the Theoretical Psychology Unit! At that time, the Faculty's Metamathematics Unit, which had been set up by Bernard Meltzer to pursue research in automated reasoning, joined the School as the Department of Computational Logic. Unfortunately, the high level of discord between the senior members of the School had become known to its main sponsors, the Science Research Council. Its reaction was to invite Sir James Lighthill to review the field. His report was published early in 1973. Although it supported AI research related to automation and to computer simulation of neurophysiological and psychological processes, it was highly critical of basic research in foundational areas such as robotics and language processing. Lighthill's report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade, the so-called "AI Winter".

Since the new School structure had failed to reduce tensions between senior staff, the second ten-year period began with an internal review of AI by a Committee appointed by the University Court. Under the chairmanship of Professor Norman Feather, it consulted widely, both inside and outside the University. Reporting in 1974, it recommended the retention of a research activity in AI but proposed significant organisational changes. The School structure was scrapped in favour of a single department, now named the Department of Artificial Intelligence; a separate unit, the Machine Intelligence Research Unit, was set up to accommodate Michie's work, and Longuet-Higgins opted to leave Edinburgh for Sussex University. The new Department's first head was Meltzer, who retired in 1977 and was replaced by Howe, who led it until 1996. Over the next decade, the Department's research was dominated by work on automated reasoning, cognitive modelling, children's learning and computation theory (until 1979, when Rod Burstall and Gordon Plotkin left to join the Theory Group in Computer Science). Some outstanding achievements included the design and development of the Edinburgh Prolog programming language by David Warren, which strongly influenced the Japanese Government's Fifth Generation Computing Project in the 1980s, Alan Bundy's demonstrations of the utility of meta-level reasoning to control the search for solutions to maths problems, and Howe's successful development of computer-based learning environments for a range of primary and secondary school subjects, working with both normal and handicapped children.

Unlike its antecedents which only undertook teaching at Masters and Ph.D. levels, the new Department had committed itself to becoming more closely integrated with the other departments in the Faculty by contributing to undergraduate teaching as well. Its first course, AI2, a computational modelling course, was launched in 1974/75. This was followed by an introductory course, AI1, in 1978/79. By 1982, it was able to launch its first joint degree, Linguistics with Artificial Intelligence. There were no blueprints for these courses: in each case, the syllabuses had to be carved out of the body of research. It was during this period that the Department also agreed to join forces with the School of Epistemics, directed by Barry Richards, to help it introduce a Ph.D. programme in Cognitive Science. The Department provided financial support in the form of part-time seconded academic staff and studentship funding; it also provided access to its interactive computing facilities. From this modest beginning there emerged the Centre for Cognitive Science which was given departmental status by the University in 1985.

The third period of AI activity at Edinburgh began with the launch of the Alvey Programme in advanced information technology in 1983. Thanks to the increasing number of successful applications of AI technology to practical tasks, in particular expert systems, the negative impact of the Lighthill Report had dissipated. Now, AI was seen as a key information technology to be fostered through collaborative projects between UK companies and UK universities. The effects on the Department were significant. By taking full advantage of various funding initiatives provoked by the Alvey programme, its academic staff complement increased rapidly from 4 to 15. The accompanying growth in research activity was focused in four areas: Intelligent Robotics, Knowledge Based Systems, Mathematical Reasoning and Natural Language Processing. During the period, the Intelligent Robotics Group undertook collaborative projects in automated assembly, unmanned vehicles and machine vision. It proposed a novel hybrid architecture for the hierarchical control of reactive robotic devices, and applied it successfully to industrial assembly tasks using a low-cost manipulator. In vision, work focused on 3-D geometric object representation, including methods for extracting such information from range data. Achievements included a working range sensor and a range-data segmentation package. Research in Knowledge Based Systems included design support systems, intelligent front ends and learning environments. The Edinburgh Designer System, a design support environment for mechanical engineers started under Alvey funding, was successfully generalised to small-molecule drug design. The Mathematical Reasoning Group prosecuted its research into the design of powerful inference techniques, in particular the development of proof plans for describing and guiding inductive proofs, with applications to problems of program verification, synthesis and transformation, as well as in areas outside Mathematics such as computer configuration and playing bridge. Research in Natural Language Processing spanned projects in the sub-areas of natural language interpretation and generation. Collaborative projects included the implementation of an English-language front end to an intelligent planning system, an investigation of the use of language-generation techniques in hypertext-based documentation systems to produce output tailored to the user's skills and working context, and exploration of semi-automated editorial assistance such as massaging a text into house style.

In 1984, the Department combined forces with the Department of Linguistics and the Centre for Cognitive Science to launch the Centre for Speech Technology Research, under the directorship of John Laver. Major funding over a five-year period was provided by the Alvey Programme to support a project demonstrating real-time continuous speech recognition.

By 1989, the University's reputation for research excellence in natural language computation and cognition enabled it to secure, in collaboration with a number of other universities, one of the major Research Centres which became available at that time, namely the Human Communication Research Centre, sponsored by ESRC.