views updated May 21 2018

PARSING [From the verb parse, from Latin pars/partis a part, abstracted from the phrase pars orationis part of speech].
1. Analysing a SENTENCE into its constituents, identifying in greater or lesser detail the syntactic relations and parts of speech.

2. Describing a WORD in a sentence, identifying its part of speech, inflectional form, and syntactic function.

Traditional parsing

Parsing was formerly central to the teaching of GRAMMAR throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language. When many people talk about formal grammar in schools, they are referring to the teaching of parsing and CLAUSE ANALYSIS, which virtually ceased in primary and secondary education in the English-speaking world in the 1960s, and in tertiary education has been superseded by linguistic analysis. The argument against traditional parsing is threefold: that it promotes old-fashioned descriptions of language based on LATIN grammatical categories; that students do not benefit from it; and that it is a source of frustration and boredom for both students and teachers. The argument in favour of parsing is fourfold: that it makes explicit the structure of speech and writing, exercises the mind in a disciplined way, enables people to talk about language usage, and helps in the learning and discussion of foreign languages. A compromise position holds that the formal discussion of SYNTAX and function can be beneficial, but should take second place to fluent expression and the achievement of confidence rather than dominate the weekly routine.

Computational parsing

When a computer parses, it analyses a string of characters in order to associate groups in the string with the syntactic units of a grammar. Computers do this mostly for programming languages but also sometimes for English. Programming languages are defined by simple but precise grammars, and the translation of these languages into machine language requires knowing which rules apply to each statement. Typical grammars for computer languages comprise a few dozen rules, and parsers built from them can process many statements per second. The grammars for such languages are designed to be unambiguous: only one ‘parse’ is possible for each statement. Computer scientists have often thought of applying similar techniques to natural language, but a language like English requires hundreds or thousands of rules, does not conform to the neat mathematical models that allow the rapid parsing of computer languages, often contains ambiguities, and has not yet been described in sufficient detail to be successfully parsed by a machine.
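A minimal sketch may make the idea concrete. The grammar, token names, and functions below are invented for illustration (they do not belong to any real programming language): a small, unambiguous grammar for an assignment statement, and a parser that associates each group of characters with one of the grammar's syntactic units.

```python
# Toy, unambiguous grammar (illustrative only):
#   stmt -> IDENT '=' expr
#   expr -> term ('+' term)*
#   term -> NUMBER | IDENT
import re

# One token per match: a number, an identifier, or a single-character operator.
TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(.))")

def tokenize(text):
    """Split the input string into (kind, text) tokens."""
    tokens = []
    for number, ident, op in TOKEN.findall(text):
        if number:
            tokens.append(("NUMBER", number))
        elif ident:
            tokens.append(("IDENT", ident))
        elif op.strip():                      # skip stray whitespace
            tokens.append((op, op))
    return tokens

def parse_stmt(tokens):
    """stmt -> IDENT '=' expr; returns the parse as nested tuples."""
    pos = 0

    def expect(kind):
        nonlocal pos
        if pos >= len(tokens) or tokens[pos][0] != kind:
            raise SyntaxError(f"expected {kind} at token {pos}")
        tok = tokens[pos]
        pos += 1
        return tok

    def term():
        nonlocal pos
        if pos < len(tokens) and tokens[pos][0] in ("NUMBER", "IDENT"):
            tok = tokens[pos]
            pos += 1
            return tok
        raise SyntaxError(f"expected a term at token {pos}")

    target = expect("IDENT")
    expect("=")
    node = term()
    while pos < len(tokens) and tokens[pos][0] == "+":
        pos += 1
        node = ("+", node, term())
    if pos != len(tokens):
        raise SyntaxError("trailing input after statement")
    return ("=", target, node)

print(parse_stmt(tokenize("total = a + 1 + 2")))
# ('=', ('IDENT', 'total'), ('+', ('+', ('IDENT', 'a'), ('NUMBER', '1')), ('NUMBER', '2')))
```

Because the grammar is unambiguous, exactly one such tree exists for each valid statement; an English sentence, by contrast, may admit several trees under a realistic grammar.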



parsing (syntax analysis) The process of deciding whether a string of input symbols is a sentence of a given language and, if so, of determining the syntactic structure of the string as defined by a grammar (usually context-free) for the language. This is achieved by means of a program known as a parser or syntax analyzer. For example, a syntax analyzer of arithmetic expressions should report an error in the string 1-+2, since the juxtaposition of the minus and plus operators is invalid. On the other hand, the string 1-2-3 is a valid arithmetic expression whose structure is specified by the statement that its subexpressions are 1, 2, 3, and 1-2. (Note that 2-3 is not a subexpression: subtraction is left-associative, so 1-2-3 is grouped as (1-2)-3.)

The input to a parser is a string of tokens supplied by a lexical analyzer. Its output may be in the form of a parse tree or a derivation sequence. See also bottom-up parsing, top-down parsing, precedence parsing.
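The pipeline just described can be sketched in a few lines of Python (the function names and token shapes here are invented for illustration): a lexical analyzer supplies the tokens, and the parser returns a parse tree as nested tuples, grouping subtraction left-associatively and rejecting invalid strings such as 1-+2.

```python
def lex(text):
    """Lexical analyzer: produce a list of NUMBER and operator tokens."""
    tokens, i = [], 0
    while i < len(text):
        ch = text[i]
        if ch.isdigit():
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1
            tokens.append(("NUMBER", text[i:j]))
            i = j
        elif ch in "+-":
            tokens.append((ch, ch))
            i += 1
        elif ch.isspace():
            i += 1
        else:
            raise SyntaxError(f"unexpected character {ch!r}")
    return tokens

def parse(tokens):
    """expr -> NUMBER (('-' | '+') NUMBER)*, grouped left-associatively."""
    pos = 0

    def number():
        nonlocal pos
        if pos >= len(tokens) or tokens[pos][0] != "NUMBER":
            raise SyntaxError(f"expected a number at token {pos}")
        value = tokens[pos][1]
        pos += 1
        return value

    tree = number()
    while pos < len(tokens):
        op = tokens[pos][0]
        if op not in ("+", "-"):
            raise SyntaxError(f"expected an operator at token {pos}")
        pos += 1
        right = number()          # "1-+2" fails here: '+' is not a number
        tree = (op, tree, right)  # fold leftward: (1-2) before -3
    return tree

print(parse(lex("1-2-3")))
# ('-', ('-', '1', '2'), '3') -- the subexpression is 1-2, not 2-3
```

The nested-tuple result plays the role of the parse tree mentioned above; a real compiler front end would typically also record source positions and emit a derivation sequence or abstract syntax tree instead.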
